DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings were received on 10/04/2021. These drawings are acceptable.
Response to Arguments
Applicant's arguments filed 9/18/2025 have been fully considered by the examiner. The remarks are directed to amended language that was not previously examined; see the current Office action below.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-4, 8, and 10-24 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claim 1, the limitations recite new matter that incorporates a narrow scope that was not previously conveyed in the original disclosure. Specifically, the narrow architecture of using a sequence of machine learning models having specific layer functions, as recited in the amended claim limitations “generating, by a processing system including at least one processor and using a first neural network, a first output, wherein the first neural network comprises: a first input layer comprising: first plurality of inputs including sensitive features and non-sensitive features; a first plurality of hidden layers; and a first output layer comprising: the first output, wherein the first output comprises a prediction generated in response to the first plurality of inputs; generating, by the processing system and using a second neural network, a second output, wherein the second neural network comprises: a second input layer comprising: a second plurality of inputs including the first output and the non-sensitive features; a second plurality of hidden layers to regress the sensitive features in response to the second plurality of inputs to produce regressed sensitive features; and a second output layer comprising: the second output, wherein the second output comprises the regressed sensitive feature,” including the specific arrangement and functions associated with the sub-layers of the plurality of neural networks, is not conveyed by or inferable from the original disclosure. The claim limitations from claims 5-7 and 9 are broadly directed to the use of machine learning models that comprise a neural network, not restricted to a specific and more narrow architecture and layer functions, e.g., the functions of the sub-layers of each neural network model as presented in the amended claim limitations. The limitations are considered new matter and are not entitled to the same filing date as the original disclosure.
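For reference only, the data flow recited in the amended limitations can be summarized as a minimal sketch, assuming hypothetical layer sizes and randomly initialized weights; the sketch is not drawn from applicant's specification and is provided solely to make the recited two-network arrangement concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

def dense(n_in, n_out):
    # Hypothetical, randomly initialized fully connected layer.
    return rng.normal(size=(n_in, n_out)), np.zeros(n_out)

# First neural network: first input layer takes sensitive + non-sensitive features.
sensitive = rng.normal(size=(1, 3))          # hypothetical sensitive features
non_sensitive = rng.normal(size=(1, 5))      # hypothetical non-sensitive features
x1 = np.hstack([sensitive, non_sensitive])   # first plurality of inputs

W1, b1 = dense(8, 16)                        # first plurality of hidden layers (one shown)
W2, b2 = dense(16, 1)                        # first output layer
prediction = relu(x1 @ W1 + b1) @ W2 + b2    # first output: the prediction

# Second neural network: second input layer takes the first output + non-sensitive features.
x2 = np.hstack([prediction, non_sensitive])  # second plurality of inputs
W3, b3 = dense(6, 16)                        # second plurality of hidden layers (one shown)
W4, b4 = dense(16, 3)                        # second output layer
regressed_sensitive = relu(x2 @ W3 + b3) @ W4 + b4  # second output: regressed sensitive features
```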
Regarding claims 19 and 20, the limitations are similar to those noted in claim 1 and are rejected under the same rationale.
Regarding the claims that depend from claim 1, the claims do not resolve the issues noted above and are thus rejected under the same rationale.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-4, 8, and 10-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, the amended limitations “generating, by a processing system including at least one processor and using a first neural network, a first output, wherein the first neural network comprises: a first input layer comprising: first plurality of inputs including sensitive features and non-sensitive features; a first plurality of hidden layers; and a first output layer comprising: the first output, wherein the first output comprises a prediction generated in response to the first plurality of inputs; generating, by the processing system and using a second neural network, a second output, wherein the second neural network comprises: a second input layer comprising: a second plurality of inputs including the first output and the non-sensitive features; a second plurality of hidden layers to regress the sensitive features in response to the second plurality of inputs to produce regressed sensitive features; and a second output layer comprising: the second output, wherein the second output comprises the regressed sensitive feature; and generating, by the processing system, a recommendation related to the prediction based on the second output” render the claim indefinite because one of ordinary skill in the art would be unable to ascertain the intended scope. Specifically, the second neural network is required to regress the claimed sensitive features, which are not input into the claimed second model (i.e., “using a second neural network, a second output, wherein the second neural network comprises: a second input layer comprising: a second plurality of inputs including the first output and the non-sensitive features”). How is the claimed second layer of the second neural network able to mathematically map features that are not provided as part of the input data? How are sensitive features regressed from non-sensitive features, as required by the amended claim limitations? Where, or by what, is the claimed second output comprising the regressed sensitive features produced (“wherein the second neural network comprises: a second input layer comprising: a second plurality of inputs including the first output and the non-sensitive features; a second plurality of hidden layers to regress the sensitive features in response to the second plurality of inputs to produce regressed sensitive features; and a second output layer comprising: the second output”)? It is not clear how the second output is generated/produced as claimed. One of ordinary skill in the art would understand that a neural network comprises linked layers in which the input layer feeds data into a hidden layer for processing, producing the output associated with that layer; this typically involves a mapping process based on the input from the input layer (e.g., using an activation function and a weight) to produce an outcome, as illustrated in the sketch below. The amended claim appears to claim a disjointed process that produces an output without the necessary input being made available, or to claim a process whose intended scope cannot be ascertained by a person having ordinary skill in the art. This renders the claim indefinite. Examiner notes that any process that uses a plurality of machine learning models reads on the claim limitation.
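As a minimal sketch of the conventional layer mapping described above (hypothetical weights and dimensions, for illustration only; not the claimed invention): a layer's output is a function of only the inputs actually supplied to it, via a weight and an activation function.

```python
import numpy as np

def layer(x, W, b, activation=np.tanh):
    # Conventional mapping: output = activation(W @ x + b).
    # The layer can only transform what is present in x; anything
    # absent from x can at best be estimated from what x contains.
    return activation(W @ x + b)

rng = np.random.default_rng(1)
x = rng.normal(size=4)                       # inputs supplied to the layer
W, b = rng.normal(size=(8, 4)), np.zeros(8)  # weight and bias
hidden_output = layer(x, W, b)               # depends solely on x, W, and b
```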
Regarding claims 19 and 20, the limitations are similar to those noted in claim 1 and are rejected under the same rationale.
Regarding the claims that depend from claim 1, the claims do not resolve the issues noted above and are thus rejected under the same rationale.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4, 8, and 10-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e. an abstract idea) without significantly more.
Claim 1: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
generating, (Considered directed to a mental process: making observations and formulating evaluations and judgments as claimed; see MPEP § 2106.04(a)(2), subsection III)
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
generating, by a processing system including at least one processor and using a first neural network, a first output, wherein the first neural network comprises: a first input layer (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus, the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).)
comprising: a first plurality of inputs including sensitive features and non-sensitive features; a first plurality of hidden layers; and a first output layer comprising: the first output, (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and elements invoking computers or other machinery merely as a tool to perform the claimed process/judicial exception.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 2: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the prediction relates to a business decision. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 3: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the sensitive features comprise data attributes that relate to an underrepresented entity or an underrepresented class, and the non-sensitive features are features which are independent of an underrepresented entity or an underrepresented class. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 4: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 3.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein at least one sensitive feature of the sensitive features has values which are non-binary. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 8: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the sensitive features are omitted from the second plurality of inputs. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 10: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein distributions of the regressed sensitive features are similar to distributions of the sensitive features included in the first plurality of inputs. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 11: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the first neural network minimizes a loss function of the first output, while the second neural network maximizes the loss function of the first output. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus, the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).) This min-max arrangement is illustrated in the sketch following this claim's analysis.
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and elements that serve as mere instructions to apply the judicial exception using a computer/computing environment as a tool.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
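As an illustrative sketch of the min-max arrangement recited in claim 11 (hypothetical one-parameter "networks" and a stand-in loss, in the style of adversarial training; this is an assumption, not applicant's disclosed method): the first model's parameter is updated to decrease a shared loss while the second model's parameter is updated to increase that same loss.

```python
# Hypothetical scalar "networks" with one parameter each, for illustration.
theta1, theta2 = 0.5, -0.5   # parameters of the first/second network
lr = 0.05                    # learning rate

def loss(t1, t2):
    # Stand-in loss coupling both networks' parameters.
    return (t1 * t2 - 1.0) ** 2

def grads(t1, t2, eps=1e-6):
    # Finite-difference gradients with respect to each parameter.
    g1 = (loss(t1 + eps, t2) - loss(t1 - eps, t2)) / (2 * eps)
    g2 = (loss(t1, t2 + eps) - loss(t1, t2 - eps)) / (2 * eps)
    return g1, g2

for _ in range(50):
    g1, g2 = grads(theta1, theta2)
    theta1 -= lr * g1   # first network: gradient descent, minimizing the loss
    theta2 += lr * g2   # second network: gradient ascent, maximizing the same loss
```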
Claim 12: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the recommendation comprises a recommendation to accept the prediction. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 13: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the recommendation comprises a recommendation to reject the prediction. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 14: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the recommendation comprises a recommendation to repeat the generating the first output, the generating the second output, and the generating the recommendation. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 15: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 14.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the repeating the generating the first output is performed using at least one of: a greater number of the sensitive features or a greater number of the non-sensitive features. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 16: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 14.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the repeating the generating the first output is performed using at least one of: a different type of the sensitive features or a different type of the non-sensitive features. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 17: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the recommendation comprises a recommendation to adjust at least one of: the generating the first output or the generating the second output. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 18: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the adjusting comprises retraining at least one of: the first machine learning model or the second machine learning model. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus, the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and elements that serve as mere instructions to apply the judicial exception using a computer/computing environment as a tool.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Regarding claims 19 and 20, the claims are similar to claim 1, and are rejected under the same rationale. Additionally:
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
when executed by a processing system including at least one processor, cause the processing system to perform operations, the operations comprising: (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus, the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).)
a processing system including at least one processor; and a non-transitory computer-readable medium storing instructions which, when executed by the processing system, cause the processing system to perform operations, the operations comprising: (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus, the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 21: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 1.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the regressed sensitive features minimize an influence of an algorithmic bias in the prediction. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation generally links the use of a judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and elements that serve as mere instructions to apply the judicial exception using a computer/computing environment as a tool.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 22: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 21.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the algorithmic bias comprises a bias that is present in the first neural network. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation generally links the use of a judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and elements that serve as mere instructions to apply the judicial exception using a computer/computing environment as a tool.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 23: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
wherein the distributions of the regressed sensitive features and the distributions of the sensitive features are compared using at least one of: entropy density or Kolmogorov-Smirnov statistics. (Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations; see MPEP § 2106.04(a)(2), subsection I.) An illustrative sketch of such a comparison follows this claim's analysis.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
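As an illustrative sketch of the distribution comparison recited in claim 23 (hypothetical data; SciPy's two-sample Kolmogorov-Smirnov test is one standard implementation of such a comparison):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
sensitive = rng.normal(loc=0.0, scale=1.0, size=1000)   # original sensitive feature values
regressed = rng.normal(loc=0.05, scale=1.0, size=1000)  # hypothetical regressed values

# The KS statistic is the maximum distance between the two empirical CDFs.
# A small statistic (large p-value) suggests the distributions are similar.
stat, p_value = ks_2samp(sensitive, regressed)
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.4f}")
```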
Claim 24: Does the claim fall within a statutory category? Yes: a method.
Step 2A Prong 1: Evaluate whether the claim recites a judicial exception.
Recites the abstract idea of claim 21.
Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception
The preamble is deemed insufficient to transform the judicial exception to a patentable invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h).
wherein the loss function of the first neural network contains losses of targets in both the first neural network and the second neural network. (Deemed insufficient to transform the judicial exception to a patentable invention because the recitation generally links the use of a judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h).)
The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A as analyzed above.
Step 2B: Evaluate whether the claim as a whole, considering the additional elements individually and in combination, amounts to significantly more than the judicial exception.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
Specifically, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and elements that serve as mere instructions to apply the judicial exception using a computer/computing environment as a tool.
These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
As shown above, claims 1-4, 8, and 10-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception and does not recite, when the claim elements are examined individually and as a whole, elements that the courts have identified as “significantly more” than the recited judicial exception. The claims are therefore directed to an abstract idea and are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 8, 10-22, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Larson et al. (US 20220222372, hereinafter ‘Larson’) in view of Nguyen et al. (US 11314945, hereinafter ‘Nguyen’), and further in view of Chakraborty et al. (US 10831927, hereinafter ‘Chak’).
Regarding independent claim 1, Larson teaches a method comprising: generating, by a processing system including at least one processor and using a first neural network, a first output, wherein the first neural network comprises: a first input layer comprising: first plurality of inputs including sensitive features and non-sensitive features; a first plurality of hidden layers; and a first output layer comprising: the first output, wherein the first output comprises a prediction generated in response to the first plurality of inputs; (as depicted in Fig. 1B, and in [0029]: As described above, in some implementations, the detection model may use pattern detection based on one or more first patterns to identify a first subset [generating, by a processing system including at least one processor and using a first neural network, a first output, wherein the first neural network comprises: a first input layer comprising:] of the potential sensitive data fields [a first input layer comprising: first plurality of inputs including sensitive features and non-sensitive features;] and pattern detection based on one or more second patterns to identify a second subset of the potential sensitive data fields. Accordingly, the false positive model may apply contextual analysis to a first set of characters that is not included in the first subset of the potential sensitive data fields and that is based on the one or more first patterns…; and in [0002]: In some implementations, a system for automatically masking sensitive data and detecting and avoiding false positives includes one or more memories and one or more processors, communicatively coupled to the one or more memories, configured to receive a set of data intended for inclusion in a data store [a first plurality of inputs including sensitive features and non-sensitive features]; detect, within the set of data and using a detection model, potential sensitive data fields, wherein the detection model is configured using at least one of: … modify the set of data to mask the potential sensitive data fields other than the at least one non-sensitive data field [a first plurality of inputs including sensitive features and non-sensitive features]; and output the modified set of data to the data store; and in [0021]: In some implementations, the detection model may include a trained machine learning model [using a first neural network, a first output, wherein the first neural network comprises: a first input layer comprising]. For example, the detection model may be trained as described below in connection with FIG. 3… [0055]: As shown by reference number 320, the machine learning system may train a machine learning model using the set of observations and using one or more machine learning algorithms, such as a regression algorithm, a decision tree algorithm, a neural network algorithm [using a first neural network, a first output, wherein the first neural network comprises: a first input layer comprising:], a k-nearest neighbor algorithm, a support vector machine algorithm, or the like…)
generating, by the processing system and using a second neural network, a second output, wherein the second neural network comprises: a second input layer comprising: a second plurality of inputs including: the first output and the non-sensitive features; a second plurality of hidden layers to regress the sensitive features in response to the second plurality of inputs to produce regressed sensitive features; and a second output layer comprising: the second output, wherein the second output comprises the regressed sensitive features; (as depicted in Fig. 3, and in [0055]-[0059]: As shown by reference number 320, the machine learning system may train a machine learning model using the set of observations and using one or more machine learning algorithms [generating, by the processing system and using a second neural network, a second output], such as a regression algorithm, a decision tree algorithm, a neural network algorithm [using a second neural network], a k-nearest neighbor algorithm, a support vector machine algorithm, or the like. After training, the machine learning system may store the machine learning model as a trained machine learning model 325 to be used to analyze new observations [the first output and the non-sensitive features, as the trained model is used to analyze new observations]... The machine learning system may apply the trained machine learning model 325 to the new observation to generate an output (e.g., a result) [wherein the second neural network comprises: a second input layer comprising: a second plurality of inputs including: the first output and the non-sensitive features]. The type of output may depend on the type of machine learning model and/or the type of machine learning task being performed. For example, the output may include a predicted value of a target variable [a second plurality of hidden layers to regress the sensitive features in response to the second plurality of inputs to produce regressed sensitive features; and a second output layer comprising: the second output, wherein the second output comprises the regressed sensitive features; using neural networks and clustering to produce a mapping of sensitive features as a predicted target value], such as when supervised learning is employed. Additionally, or alternatively, the output may include information that identifies a cluster to which the new observation belongs and/or information that indicates a degree of similarity [a second plurality of hidden layers to regress the sensitive features in response to the second plurality of inputs to produce regressed sensitive features; and a second output layer comprising: the second output, wherein the second output comprises the regressed sensitive features; using neural networks and clustering to produce a mapping of sensitive features to clusters and similarity scores as regressed sensitive features] between the new observation and one or more other observations [the first output and the non-sensitive features, as the modeled clusters are the first output and non-sensitive features used by the second model to process new observations], such as when unsupervised learning is employed…)
and generating, by the processing system, a recommendation related to the prediction based on the second output (as depicted in Fig. 3, and in [0059]-[0062]: In some implementations, the trained machine learning model 325 may classify (e.g., cluster) the new observation in a cluster [and generating, by the processing system, a recommendation related to the prediction based on the second output], as shown by reference number 340. The observations within a cluster may have a threshold degree of similarity… As another example, if the machine learning system were to classify the new observation in a second cluster (e.g., not potentially sensitive fields), then the machine learning system may provide a second (e.g., different) recommendation (e.g., the second recommendation described above) and/or may perform or cause performance of a second (e.g., different) automated action [and generating, by the processing system, a recommendation related to the prediction based on the second output as recommending to perform an action based on the predicted cluster], such as the second automated action described above... In this way, the machine learning system may apply a rigorous and automated process to detecting potential sensitive data fields (e.g., as described above in connection with FIG. 1A). Explicit false positive detection (e.g., as described above in connection with FIG. 1B) may be applied to the machine learning system in order to increase accuracy of the system beyond that achievable using training alone, as described above. As a result, computing and networking resources may be conserved that would otherwise have been consumed in correcting false positives, attempting to recover any information lost when false positives were inadvertently masked, conducting additional machine learning to reduce future false positives [and generating, by the processing system, a recommendation related to the prediction based on the second output as recommending additional learning], and so on.)
Larson teaches using the machine learning process to train models and make predicted outcomes using neural network models. One of ordinary skill in the art would ascertain that neural networks can be used in regressing features as mapped/modeled feature clusters/information, wherein a neural network would have at least three or more layers for regressing features using the learning algorithms associated with the model. Additionally, Nguyen expressly teaches that neural networks can be used in regressing features as mapped/modeled feature clusters/information, wherein a neural network would have at least three or more layers for regressing features using the learning algorithms associated with the model, and where the layers can perform specific tasks, in 4:37-5:3: In some embodiments, the text generation model or message selection models may include one or more neural networks or other machine learning models [wherein the first neural network comprises: a first input layer comprising: first plurality of inputs including … features; a first plurality of hidden layers; and a first output layer … using a second neural network, a second output, wherein the second neural network comprises: a second input layer comprising: a second plurality of inputs including: … a second plurality of hidden layers to regress … the second plurality of inputs to produce regressed … features; and a second output layer comprising: the second output, wherein the second output comprises the regressed … features]… These neural network systems may be self-learning and trained, rather than explicitly programmed, and can perform significantly better in certain areas of problem solving, as compared to traditional computer programs. In some embodiments, neural networks may include multiple layers (e.g., where a signal path traverses from front layers to back layers)... Additionally, as further described below, some models may include specific sets of neural network layers to perform different tasks [wherein the first neural network comprises: a first input layer comprising: first plurality of inputs including … features; a first plurality of hidden layers; and a first output layer … using a second neural network, a second output, wherein the second neural network comprises: a second input layer comprising: a second plurality of inputs including: … a second plurality of hidden layers to regress … the second plurality of inputs to produce regressed … features; and a second output layer comprising: the second output, wherein the second output comprises the regressed … features], such as encoding n-grams into embedding vectors, decoding the embedding vectors into n-grams such as words or phrases, predicting missing or masked n-grams, etc.; and in 8:21-27: Alternatively, or in addition, some embodiments may use one or more neural network models to perform one or more feature selection operations, such as using a learnable mask to perform a soft selection of features [a second plurality of hidden layers to regress … features in response to the second plurality of inputs to produce regressed … features; and a second output layer comprising: the second output, wherein the second output comprises the regressed … features]. For example, some embodiments may use a vector representing the learnable mask, where a dot product of the vector and a second vector representing a set of features may be used as an output…
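For purposes of illustration only, the following is a minimal sketch of the two-network arrangement recited in claim 1; it is not drawn from Larson, Nguyen, or the application, and the layer sizes, feature counts, and names (predictor, regressor) are illustrative assumptions:

    import torch
    import torch.nn as nn

    n_sensitive, n_nonsensitive = 3, 5  # assumed feature counts

    # First neural network: input layer over sensitive + non-sensitive
    # features, hidden layers, and an output layer producing the prediction
    # (the first output).
    predictor = nn.Sequential(
        nn.Linear(n_sensitive + n_nonsensitive, 16), nn.ReLU(),
        nn.Linear(16, 16), nn.ReLU(),
        nn.Linear(16, 1),
    )

    # Second neural network: input layer over the first output + the
    # non-sensitive features, hidden layers regressing the sensitive
    # features, and an output layer producing the regressed sensitive
    # features (the second output).
    regressor = nn.Sequential(
        nn.Linear(1 + n_nonsensitive, 16), nn.ReLU(),
        nn.Linear(16, 16), nn.ReLU(),
        nn.Linear(16, n_sensitive),
    )

    x_sens = torch.randn(8, n_sensitive)
    x_nonsens = torch.randn(8, n_nonsensitive)
    first_output = predictor(torch.cat([x_sens, x_nonsens], dim=1))
    second_output = regressor(torch.cat([first_output, x_nonsens], dim=1))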
Nguyen and Larson are analogous art because both involve developing data processing and information retrieval techniques using machine learning systems and algorithms.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for training machine learning models for data processing and information retrieval tasks as disclosed by Nguyen with the method of developing data processing and information retrieval techniques from data including sensitive and nonsensitive personal information using machine learning systems and algorithms as disclosed by Larson.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Nguyen and Larson in order to develop a machine learning feature processing subsystem that helps determine one or more user categories for users based on the features of a user profile (Nguyen, 5:43-54).
Additionally, Chak teaches using a neural network to regress sensitive data, in 1:9-14: Data anonymization refers to computing mechanisms that are used to remove and/or obfuscate one or more attribute values of one or more data stores such that the resulting views of the data store can no longer be used to identify a name of an individual or associate an individual to sensitive information… And in 4:41-63: After the raw data 101 is fed through the word embedding vector model(s) 103, it is fed through the noise propagation module 121 (e.g., a modified auto-encoder) where the data is first presented as data tensors 105. The noise propagation module 121 is a module that adds noise to the data. As shown in FIG. 1, the noise generation is performed using the anonymizer 109. The anonymizer 109 sits between the encoder 107 and the decoder 111 of the noise propagation module 121. The output of the encoder 107 is one or more "codes." [the first neural network comprises: a first input layer comprising: first plurality of inputs including sensitive features and non-sensitive features; a first plurality of hidden layers; and a first output layer comprising: the first output, wherein the first output comprises a prediction generated in response to the first plurality of inputs] The one or more codes generated by the encoder 107 are input to the anonymizer 109. In embodiments, the anonymizer 109 ensures that appropriate noise is added to codes such that they are clustered or grouped and each cluster has at least k-members. The anonymizer 109 thus generates noisy code [wherein the first output comprises a prediction generated in response to the first plurality of inputs]. The noisy codes are then fed to the decoder 111, which then reconstructs data [second neural network comprises: a second input layer comprising: a second plurality of inputs including: the first output and the non-sensitive features; a second plurality of hidden layers to regress the sensitive features in response to the second plurality of inputs to produce regressed sensitive features; and a second output layer comprising: the second output, wherein the second output comprises the regressed sensitive features]. The reconstructed data is then passed according to a policy (e.g., through a nearest neighbor decoder) to generate anonymous data [the second output, wherein the second output comprises the regressed sensitive features]. Although the noise propagation module 121 is illustrated as including the data tensors 105, the encoder 107, the anonymizer 109, and the decoder 111, it is understood that this is representative only and that more or less components can exist within any suitable noise propagation module.
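For illustration only, a minimal sketch of the encoder-anonymizer-decoder flow Chak describes, under the assumptions that the code is a small dense vector and the added noise is Gaussian (neither is specified by Chak); the k-member clustering constraint is omitted:

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(10, 4), nn.ReLU())  # produces the "code"
    decoder = nn.Linear(4, 10)                            # reconstructs data

    def anonymizer(code: torch.Tensor, scale: float = 0.1) -> torch.Tensor:
        # Sits between the encoder and decoder and adds noise to the code
        # (Chak's at-least-k-members clustering step is not modeled here).
        return code + scale * torch.randn_like(code)

    raw = torch.randn(8, 10)
    reconstructed = decoder(anonymizer(encoder(raw)))  # basis for anonymous data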
Additionally, Chak teaches generating, by the processing system, a recommendation related to the prediction based on the second output, in 7:20-42: In some embodiments, the vector space 309 represents a "pre-trained" embedding. A pre-trained embedding is a static model that is generated without feedback, retraining, or reference to the data sets being fed through it. For example, a user may download a static word embedding vector model from an online source, which is already trained and includes the vectors or data points already mapped in vector space according to semantic similarity between words [generating, by the processing system and using a second neural network, a second output, wherein the second neural network comprises: a second input layer comprising: a second plurality of inputs including: the first output and the non-sensitive features; a second plurality of hidden layers to regress the sensitive features in response to the second plurality of inputs to produce regressed sensitive features; and a second output layer comprising: the second output, wherein the second output comprises the regressed sensitive features]. In other embodiments, the vector space 309 represents a "retrained" embedding. A retrained word embedding model is an embedding that receives training feedback after it has received initial training session(s) and is optimized or generated for a specific data set (e.g., microdata, anonymized databases, etc.). For example, as illustrated in FIG. 1, after the decoder 111 decodes the data, the system "re-trains" the word embedding vector model(s) 103 a second time so that any vectors or words (e.g., M.D.) in a future data set are consistently mapped to its closest neighbor (e.g., higher education) or other word according to the policy implemented [generating, by the processing system, a recommendation related to the prediction based on the second output as recommended retraining]. In some embodiments, retraining includes issuing feedback to make sure the correct data point pairing (e.g., M.D. and higher education) is utilized.
Chak, Nguyen and Larson are analogous art because all three involve developing data processing and information retrieval techniques using machine learning systems and algorithms.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for training machine learning models for data processing and information retrieval tasks using data anonymization techniques as disclosed by Chak with the method of developing data processing and information retrieval techniques from data including sensitive and nonsensitive personal information using machine learning systems and algorithms as collectively disclosed by Nguyen and Larson.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Chak, Nguyen and Larson in order to develop machine learning feature processing techniques for generating data representations that lend themselves to anonymization while better preserving the utility of the data (Chak, 4:13-20).
Regarding claim 2, the rejection of claim 1 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 1, wherein the prediction relates to a business decision (in [0012]: Sensitive fields, such as PII, may be masked before data that includes those sensitive fields is stored. For example, a system may use non-sensitive portions of the data such that the sensitive fields should be masked for security. Additionally, or alternatively, a system may lack sufficient encryption (e.g., according to legal rules, such as the Health Insurance Portability and Accountability Act (HIPAA), the General Data Protection Regulation (GDPR), and/or other laws and rules) such that the sensitive fields should be masked [wherein the prediction relates to a business decision as PII compliance].)
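For illustration only, a toy example of the kind of pattern-based masking of sensitive fields Larson describes for PII compliance; the pattern and placeholder below are assumptions, not Larson's disclosure:

    import re

    # Assumed pattern for a 9-digit identifier written with spaces
    # (e.g., "123 45 6789"); Larson's actual patterns are not reproduced here.
    ID_PATTERN = re.compile(r"\b\d{3} \d{2} \d{4}\b")

    def mask_sensitive(text: str) -> str:
        # Replace each detected sensitive field with a masked placeholder.
        return ID_PATTERN.sub("XXX XX XXXX", text)

    print(mask_sensitive("Account holder 123 45 6789 requested a statement."))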
Regarding claim 3, the rejection of claim 1 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 1, wherein the sensitive features comprise data attributes that relate to an underrepresented entity or an underrepresented class (in [0001]: Sensitive fields may include personally identifiable information (PII) [wherein the sensitive features comprise data attributes that relate to an underrepresented entity or an underrepresented class], such as national identification numbers (e.g., social security numbers (SSNs) in the United States, social insurance numbers (SINs) in Canada, SSNs in the Philippines, permanent account numbers (PANs) in India, national insurance numbers (NINOs) in the United Kingdom, employer identification numbers (EINs) in the United States [wherein the sensitive features comprise data attributes that relate to an underrepresented entity or an underrepresented class], individual taxpayer identification numbers (ITINs) in the United States, tax identification numbers (TINs) in Costa Rica, and/or other unique or quasi-unique identification numbers), credit card numbers, bank account numbers, passport numbers, and/or other PII…)
and the non-sensitive features are features which are independent of an underrepresented entity or an underrepresented class (in [0017]: As shown by reference number 105, the masking device may receive a set of data intended for storage (e.g., in a remote and/or local data store). For example, the masking device may receive the set of data from a database (e.g., a relational database, a graphical database, and/or another database) and/or another data source (e.g., a cloud-based storage and/or a local storage). The set of data may include sensitive fields and non-sensitive fields [and the non-sensitive features are features which are independent of an underrepresented entity or an underrepresented class]... Accordingly, the SSN and the SIN may be sensitive fields, and the names and telephone numbers may be non-sensitive fields [and the non-sensitive features are features which are independent of an underrepresented entity or an underrepresented class].)
Regarding claim 4, the rejection of claim 3 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 3, wherein at least one sensitive feature of the sensitive features has values which are non-binary (in [0001]: Sensitive fields may include personally identifiable information (PII) [wherein at least one sensitive feature of the sensitive features has values which are non-binary], such as national identification numbers (e.g., social security numbers (SSNs) in the United States, social insurance numbers (SINs) in Canada, SSNs in the Philippines, permanent account numbers (PANs) in India, national insurance numbers (NINOs) in the United Kingdom, employer identification numbers (EINs) in the United States, individual taxpayer identification numbers (ITINs) in the United States, tax identification numbers (TINs) in Costa Rica, and/or other unique or quasi-unique identification numbers), credit card numbers, bank account numbers, passport numbers, and/or other PII [wherein at least one sensitive feature of the sensitive features has values which are non-binary]…)
Regarding claim 8, the rejection of claim 1 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 1, wherein the sensitive features are omitted from the second plurality of inputs (in [0056]: As shown by reference number 330, the machine learning system may apply the trained machine learning model 325 to a new observation, such as by receiving a new observation and inputting the new observation to the trained machine learning model 325. As shown, the new observation [wherein the sensitive features are omitted from the second plurality of inputs as indicating features omitting sensitive data features from the cluster of inputs] may include a first feature indicating a pattern of XXX XXX XXX, a second feature indicating a numeric data type, a third feature indicating that spaces are used as separators, and so on, as an example. The machine learning system may apply the trained machine learning model 325 to the new observation to generate an output (e.g., a result). The type of output may depend on the type of machine learning model and/or the type of machine learning task being performed. For example, the output may include a predicted value of a target variable, such as when supervised learning is employed. Additionally, or alternatively, the output may include information that identifies a cluster to which the new observation belongs and/or information that indicates a degree of similarity between the new observation and one or more other observations [wherein the sensitive features are omitted from the second plurality of inputs], such as when unsupervised learning is employed.)
Regarding claim 10, the rejection of claim 1 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 1, wherein distributions of the regressed sensitive features are similar to distributions of the sensitive features as included in the first plurality of inputs (in [0056]: As shown by reference number 330, the machine learning system may apply the trained machine learning model 325 to a new observation, such as by receiving a new observation and inputting the new observation to the trained machine learning model 325…. The type of output may depend on the type of machine learning model and/or the type of machine learning task being performed. For example, the output may include a predicted value of a target variable, such as when supervised learning is employed. Additionally, or alternatively, the output may include information that identifies a cluster to which the new observation belongs and/or information that indicates a degree of similarity between the new observation and one or more other observations [wherein distributions of the regressed sensitive features are similar to distributions of the sensitive features as included in the first plurality of inputs, as determined by the clusters of similar features], such as when unsupervised learning is employed.)
Regarding claim 11, the rejection of claim 1 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 1, wherein the first neural network minimizes a loss function of the first output (in [0082]: As shown in FIG. 6, process 600 may include receiving a set of data intended for storage (block 610). For example, the set of data may be intended for inclusion in a data store. As further shown in FIG. 6, process 600 may include detecting, within the set of data and using a detection model, a set of potential sensitive data fields (block 620). For example, the detection model [wherein the first machine learning model minimizes a loss function of the first output as matching patterns] may use data type matching, pattern matching, and/or keyword matching, as described elsewhere herein…; and in [0021]: In some implementations, the detection model may include a trained machine learning model [the first neural network]. For example, the detection model may be trained as described below in connection with FIG. 3… [0055] As shown by reference number 320, the machine learning system may train a machine learning model using the set of observations and using one or more machine learning algorithms, such as a regression algorithm, a decision tree algorithm, a neural network algorithm [the first neural network], a k-nearest neighbor algorithm, a support vector machine algorithm, or the like…)
while the second neural network maximizes the loss function of the first output (in [0082]: … FIG. 6, process 600 may include detecting, within the potential sensitive data fields and using a false positive model, at least one non-sensitive data field (block 630) [while the second neural network maximizes the loss function of the first output as not detected as potential sensitive data]. For example, the false positive model may use contextual analysis, as described elsewhere herein. As further shown in FIG. 6, process 600 may include modifying the set of data to mask the potential sensitive data fields other than the at least one non-sensitive data field (block 640). For example, the masking device may mask first data included in the set of potential sensitive data fields other than the at least one non-sensitive data field and refrain [while the second neural network maximizes the loss function of the first output as not detected as potential sensitive data] from masking second data included in the at least one non-sensitive data field; and in [0055]-[0059]: As shown by reference number 320, the machine learning system may train a machine learning model using the set of observations and using one or more machine learning algorithms [generating, by the processing system and using a second neural network, a second output], such as a regression algorithm, a decision tree algorithm, a neural network algorithm [second neural network], a k-nearest neighbor algorithm, a support vector machine algorithm, or the like. After training, the machine learning system may store the machine learning model as a trained machine learning model 325 to be used to analyze new observations...)
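For illustration only, one conventional reading of the claim 11 min/max relation is adversarial debiasing, in which the first network descends a combined loss over the first output while the second network ascends the same loss; the loss form, learning rates, and dimensions below are assumptions, not the claimed or cited method:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)
    first = nn.Linear(8, 1)       # stands in for the first neural network
    second = nn.Linear(1 + 5, 3)  # stands in for the second neural network
    opt = torch.optim.SGD(first.parameters(), lr=0.01)

    x_sens, x_nonsens = torch.randn(16, 3), torch.randn(16, 5)
    target = torch.randn(16, 1)

    for _ in range(10):
        pred = first(torch.cat([x_sens, x_nonsens], dim=1))
        regressed = second(torch.cat([pred, x_nonsens], dim=1))
        # Loss over the first output: prediction error minus the
        # recoverability of the sensitive features from that output.
        loss = F.mse_loss(pred, target) - F.mse_loss(regressed, x_sens)
        opt.zero_grad()
        second.zero_grad()
        loss.backward()
        opt.step()  # first network minimizes the loss
        with torch.no_grad():
            for p in second.parameters():  # second network maximizes the
                p += 0.01 * p.grad         # same loss by gradient ascent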
Regarding claim 12, the rejection of claim 1 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 1, wherein the recommendation comprises a recommendation to accept the prediction (in [0057]: As an example, the trained machine learning model 325 may predict a value of true for the target variable for the new observation [wherein the recommendation comprises a recommendation to accept the prediction], as shown by reference number 335. Based on this prediction, the machine learning system may provide a first recommendation, may provide output for determination of a first recommendation [wherein the recommendation comprises a recommendation to accept the prediction], may perform a first automated action, and/or may cause a first automated action to be performed (e.g., by instructing another device to perform the automated action), among other examples.)
Regarding claim 13, the rejection of claim 1 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 1, wherein the recommendation comprises a recommendation to reject the prediction (in [0058]: As another example, if the machine learning system were to predict a value of false for the target variable [wherein the recommendation comprises a recommendation to reject the prediction], then the machine learning system may provide a second (e.g., different) recommendation [wherein the recommendation comprises a recommendation to reject the prediction as recommendation to reject the first prediction] (e.g., to refrain from masking the sensitive field associated with the new observation) and/or may perform or cause performance of a second (e.g., different) automated action (e.g., retaining content of the sensitive field associated with the new observation).)
Regarding claim 14, the rejection of claim 1 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 1, wherein the recommendation comprises a recommendation to repeat the generating the first output, the generating the second output, and the generating the recommendation (in [0058]: As another example, if the machine learning system were to predict a value of false for the target variable, then the machine learning system may provide a second (e.g., different) recommendation [wherein the recommendation comprises a recommendation to repeat the generating the first output, the generating the second output, and the generating the recommendation] (e.g., to refrain from masking the sensitive field associated with the new observation) and/or may perform or cause performance of a second (e.g., different) automated action (e.g., retaining content of the sensitive field associated with the new observation).)
Regarding claim 15, the rejection of claim 14 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 14, wherein the repeating the generating the first output is performed using at least one of: a greater number of the sensitive features or a greater number of the non-sensitive features (in [0058]: As another example, if the machine learning system were to predict a value of false for the target variable [wherein the repeating the generating the first output is performed using at least one of: a greater number of the sensitive features or a greater number of the non-sensitive features, as the target variable is evaluated over a greater number of non-sensitive features, resulting in the false value], then the machine learning system may provide a second (e.g., different) recommendation (e.g., to refrain from masking the sensitive field associated with the new observation) and/or may perform or cause performance of a second (e.g., different) automated action (e.g., retaining content of the sensitive field associated with the new observation).)
Regarding claim 16, the rejection of claim 14 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 14, wherein the repeating the generating the first output is performed using at least one of: a different type of the sensitive features or a different type of the non-sensitive features (in [0058]: As another example, if the machine learning system were to predict a value of false for the target variable, then the machine learning system may provide a second (e.g., different) recommendation [wherein the repeating the generating the first output is performed using at least one of: a different type of the sensitive features or a different type of the non-sensitive features] (e.g., to refrain from masking the sensitive field associated with the new observation) and/or may perform or cause performance of a second (e.g., different) automated action (e.g., retaining content of the sensitive field associated with the new observation [wherein the repeating the generating the first output is performed using at least one of: a different type of the sensitive features or a different type of the non-sensitive features as associated with a new observation]).)
Regarding claim 17, the rejection of claim 1 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 1, wherein the recommendation comprises a recommendation to adjust at least one of: the generating the first output or the generating the second output (in [0059]-[0060]: In some implementations, the trained machine learning model 325 may classify (e.g., cluster) the new observation in a cluster, as shown by reference number 340. The observations within a cluster may have a threshold degree of similarity. As an example, if the machine learning system classifies the new observation in a first cluster (e.g., potentially sensitive fields), then the machine learning system may provide a first recommendation [wherein the recommendation comprises a recommendation to adjust at least one of: the generating the first output or the generating the second output], such as the first recommendation described above. Additionally, or alternatively, the machine learning system may perform a first automated action and/or may cause a first automated action to be performed (e.g., by instructing another device to perform the automated action) based on classifying the new observation in the first cluster, such as the first automated action described above. As another example, if the machine learning system were to classify the new observation in a second cluster (e.g., not potentially sensitive fields), then the machine learning system may provide a second (e.g., different) recommendation (e.g., the second recommendation described above) and/or may perform or cause performance of a second (e.g., different) automated action [wherein the recommendation comprises a recommendation to adjust at least one of: the generating the first output or the generating the second output], such as the second automated action described above.)
Regarding claim 18, the rejection of claim 1 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 1, wherein the adjusting comprises retraining at least one of: the first neural network or the second neural network (in [0058]: As another example, if the machine learning system were to predict a value of false for the target variable, then the machine learning system may provide a second (e.g., different) recommendation (e.g., to refrain from masking the sensitive field associated with the new observation) and/or may perform or cause performance of a second (e.g., different) automated action (e.g., retaining [wherein the adjusting comprises retraining at least one of: the first neural network or the second neural network] content of the sensitive field associated with the new observation); and the claimed neural network model in [0055]-[0059]).
Regarding claims 19 and 20, the claims are similar to claim 1, and are rejected under the same rationale. Additionally, Larson teaches
when executed by a processing system including at least one processor, cause the processing system to perform operations, the operations comprising: (in [0066]: Computing hardware 403 includes hardware and corresponding resources from one or more computing devices. For example, computing hardware 403 may include hardware from a single computing device (e.g., a single server) or from multiple computing devices (e.g., multiple servers), such as multiple computing devices in one or more data centers. Computer hardware 403 may include one or more processors [when executed by a processing system including at least one processor, cause the processing system to perform operations, the operations comprising], one or more memories, one or more storage components, and/or one or more networking components, examples of which are described elsewhere herein; and in [0079]: Device 500 may perform one or more processes described herein. For example, a non-transitory computer-readable medium (e.g., memory 530 and/or storage component 540) may store a set of instructions (e.g., one or more instructions, code, software code, and/or program code) for execution by processor 520. Processor 520 may execute the set of instructions to perform one or more processes described herein [when executed by a processing system including at least one processor, cause the processing system to perform operations, the operations comprising]. In some implementations, execution of the set of instructions, by one or more processors 520, causes the one or more processors 520 and/or the device 500 to perform one or more processes described herein. In some implementations, hardwired circuitry may be used instead of or in combination with the instructions to perform one or more processes described herein…)
and a processing system including at least one processor; and a non-transitory computer-readable medium storing instructions which, when executed by the processing system, cause the processing system to perform operations, the operations comprising: (in [0066]: Computing hardware 403 includes hardware and corresponding resources from one or more computing devices. For example, computing hardware 403 may include hardware from a single computing device (e.g., a single server) or from multiple computing devices (e.g., multiple servers), such as multiple computing devices in one or more data centers. Computer hardware 403 may include one or more processors, one or more memories, one or more storage components [one processor; and a non-transitory computer-readable medium storing instructions which, when executed by the processing system, cause the processing system to perform operations], and/or one or more networking components, examples of which are described elsewhere herein; and in [0079]: Device 500 may perform one or more processes described herein. For example, a non-transitory computer-readable medium (e.g., memory 530 and/or storage component 540) may store a set of instructions (e.g., one or more instructions, code, software code, and/or program code) for execution by processor 520 [one processor; and a non-transitory computer-readable medium storing instructions which, when executed by the processing system, cause the processing system to perform operations]. Processor 520 may execute the set of instructions to perform one or more processes described herein. In some implementations, execution of the set of instructions, by one or more processors 520, causes the one or more processors 520 and/or the device 500 to perform one or more processes described herein. In some implementations, hardwired circuitry may be used instead of or in combination with the instructions to perform one or more processes described herein…)
Regarding claim 21, the rejection of claim 1 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 1, wherein the regressed sensitive features minimize an influence of an algorithmic bias in the prediction (in [0055]-[0059]: As shown by reference number 320, the machine learning system may train a machine learning model using the set of observations and using one or more machine learning algorithms [wherein the regressed sensitive features minimize an influence of an algorithmic bias in the prediction], such as a regression algorithm, a decision tree algorithm, a neural network algorithm [wherein the regressed sensitive features minimize an influence of an algorithmic bias in the prediction], a k-nearest neighbor algorithm, a support vector machine algorithm, or the like. After training, the machine learning system may store the machine learning model as a trained machine learning model 325 to be used to analyze new observations... The machine learning system may apply the trained machine learning model 325 to the new observation to generate an output (e.g., a result) [wherein the regressed sensitive features minimize an influence of an algorithmic bias in the prediction]. The type of output may depend on the type of machine learning model and/or the type of machine learning task being performed. For example, the output may include a predicted value of a target variable, such as when supervised learning is employed. Additionally, or alternatively, the output may include information that identifies a cluster to which the new observation belongs and/or information that indicates a degree of similarity between the new observation and one or more other observations, such as when unsupervised learning is employed…; and in [0034]: By using the techniques described above, the masking device can implement separate false positive detection after sensitive field detection. As a result, the masking device increases accuracy [wherein the regressed sensitive features minimize an influence of an algorithmic bias in the prediction] of automatic masking beyond that of existing automated tools. Thus, sensitive fields are masked faster and more accurately, and the output does not need to be manually corrected for false positive errors. This, in turn, conserves computing and networking resources that would otherwise have been wasted in correcting false positives, attempting to recover any information lost when false positives were inadvertently masked, training the software to reduce future false positives [wherein the regressed sensitive features minimize an influence of an algorithmic bias in the prediction], and so on.)
Nguyen further teaches that machine learning models are trained with biases and parameters that are part of the machine learning model during the training process, in 5:4-28: As an example, with respect to FIG. 2, machine learning model 202 may take inputs 204 and provide outputs 206. In one use case, outputs 206 may be fed back to machine learning model 202 as input to train machine learning model 202 (e.g., alone or in conjunction with user indications of the accuracy of the outputs 206, labels associated with the inputs, or with other reference feedback information). In another use case, machine learning model 202 may update its configurations (e.g., weights, biases [wherein the regressed sensitive features minimize an influence of an algorithmic bias in the prediction], or other parameters) based on its assessment of its prediction (e.g., outputs 206) and reference feedback information (e.g., user indication of accuracy, reference labels, or other information). In another use case, where machine learning model 202 is a neural network, connection weights may be adjusted to reconcile differences between the neural network's prediction and the reference feedback... Updates to the connection weights may, for example, be reflective of the magnitude of error [wherein the regressed sensitive features minimize an influence of an algorithmic bias in the prediction] propagated backward after a forward pass has been completed. In this way, for example, the machine learning model 202 may be trained to generate better predictions [wherein the regressed sensitive features minimize an influence of an algorithmic bias in the prediction as minimized influence of an algorithmic bias in the prediction by adjusting weights]…
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Nguyen and Larson for the same reasons disclosed above.
Regarding claim 22, the rejection of claim 21 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 21, wherein the algorithmic bias comprises a bias that is present in the first neural network (in [0021]: In some implementations, the detection model may include a trained machine learning model [wherein the algorithmic bias comprises a bias that is present in the first neural network]. For example, the detection model may be trained as described below in connection with FIG. 3…)
Nguyen likewise teaches that machine learning models are trained with biases and parameters that are part of the machine learning model during the training process, in 5:4-28: As an example, with respect to FIG. 2, machine learning model 202 may take inputs 204 and provide outputs 206. In one use case, outputs 206 may be fed back to machine learning model 202 as input to train machine learning model 202 (e.g., alone or in conjunction with user indications of the accuracy of the outputs 206, labels associated with the inputs, or with other reference feedback information). In another use case, machine learning model 202 may update its configurations (e.g., weights, biases [wherein the algorithmic bias comprises a bias that is present in the first neural network], or other parameters) based on its assessment of its prediction (e.g., outputs 206) and reference feedback information (e.g., user indication of accuracy, reference labels, or other information). In another use case, where machine learning model 202 is a neural network, connection weights may be adjusted to reconcile differences between the neural network's prediction and the reference feedback... Updates to the connection weights may, for example, be reflective of the magnitude of error [wherein the regressed sensitive features minimize an influence of an algorithmic bias in the prediction] propagated backward after a forward pass has been completed. In this way, for example, the machine learning model 202 may be trained to generate better predictions [wherein the regressed sensitive features minimize an influence of an algorithmic bias in the prediction as minimized influence of an algorithmic bias in the prediction by adjusting weights]…
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Nguyen and Larson for the same reasons disclosed above.
Regarding claim 24, the rejection of claim 10 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 10, wherein the loss function of the first neural network contains losses of targets in both the first neural network and the second neural network (in [0055]: As shown by reference number 320, the machine learning system may train a machine learning model using the set of observations and using one or more machine learning algorithms [wherein the loss function of the first neural network contains losses of targets in both the first neural network and the second neural network], such as a regression algorithm, a decision tree algorithm, a neural network algorithm [wherein the loss function of the first neural network contains losses of targets in both the first neural network and the second neural network], a k-nearest neighbor algorithm, a support vector machine algorithm, or the like. After training, the machine learning system may store the machine learning model as a trained machine learning model 325 to be used to analyze new observations... [0057] As an example, the trained machine learning model 325 may predict a value of true for the target variable for the new observation [wherein the loss function of the first neural network contains losses of targets in both the first neural network and the second neural network], as shown by reference number 335. Based on this prediction, the machine learning system may provide a first recommendation, may provide output for determination of a first recommendation, may perform a first automated action, and/or may cause a first automated action to be performed (e.g., by instructing another device to perform the automated action), among other examples...; and in [0014]: By implementing separate false positive detection after sensitive field detection, accuracy of automatic masking can be increased beyond existing automated tools [wherein the loss function of the first neural network contains losses of targets in both the first neural network and the second neural network]…)
And Nguyen teaches neural network models that include multiple layers and are trained using back propagation techniques, in 4:37-65: In some embodiments, the text generation model or message selection models may include one or more neural networks [wherein the loss function of the first neural network contains losses of targets in both the first neural network and the second neural network] or other machine learning models. As an example, neural networks may be based on a large collection of neural units (or artificial neurons)… These neural network systems may be self-learning and trained, rather than explicitly programmed, and can perform significantly better in certain areas of problem solving, as compared to traditional computer programs. In some embodiments, neural networks may include multiple layers (e.g., where a signal path traverses from front layers to back layers). In some embodiments, back propagation [wherein the loss function of the first neural network contains losses of targets in both the first neural network and the second neural network] techniques may be utilized by the neural networks, where forward stimulation is used to reset weights on the "front" neural units. In some embodiments, stimulation and inhibition for neural networks may be more free-flowing, with connections interacting in a more chaotic and complex fashion…; and in 15:47-53: In some embodiments the tokens may include word vectors determined from a set of encoder neural network layers, where the encoder neural network layers may implement a feed forward architecture to increase encoding efficiency. For example, some embodiments may use an autoencoder [wherein the loss function of the first neural network contains losses of targets in both the first neural network and the second neural network] similar to those described by Vaswani et al…
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Nguyen and Larson for the same reasons disclosed above.
Chak teaches the use of two neural networks, encoder and decoder networks, for encoding input features, in 4:41-63: After the raw data 101 is fed through the word embedding vector model(s) 103, it is fed through the noise propagation module 121 (e.g., a modified auto-encoder) [wherein the loss function of the first neural network contains losses of targets in both the first neural network and the second neural network] where the data is first presented as data tensors 105. The noise propagation module 121 is a module that adds noise to the data. As shown in FIG. 1, the noise generation is performed using the anonymizer 109. The anonymizer 109 sits between the encoder 107 and the decoder 111 of the noise propagation module 121. The output of the encoder 107 is one or more “codes.” The one or more codes generated by the encoder 107 are input to the anonymizer 109. In embodiments, the anonymizer 109 ensures that appropriate noise is added to codes such that they are clustered or grouped and each cluster has at least k-members. The anonymizer 109 thus generates noisy code. The noisy codes are then fed to the decoder 111, which then reconstructs data. The reconstructed data is then passed according to a policy (e.g., through a nearest neighbor decoder) to generate anonymous data. Although the noise propagation module 121 is illustrated as including the data tensors 105, the encoder 107, the anonymizer 109, and the decoder 111, it is understood that this is representative only and that more or less components can exist within any suitable noise propagation module.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Chak, Nguyen and Larson for the same reasons disclosed above.
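For illustration only, one assumed form of the claim 24 limitation, in which a single loss attributed to the first network also contains the loss of the second network's target (the sensitive features); the MSE losses and the weighting lam are assumptions, not any reference's disclosure:

    import torch
    import torch.nn.functional as F

    pred = torch.randn(16, 1, requires_grad=True)       # first output
    regressed = torch.randn(16, 3, requires_grad=True)  # second output
    y = torch.randn(16, 1)       # target of the first neural network
    x_sens = torch.randn(16, 3)  # target of the second neural network
    lam = 0.5                    # assumed weighting between the two targets

    # A single loss containing losses of targets in both networks.
    joint_loss = F.mse_loss(pred, y) + lam * F.mse_loss(regressed, x_sens)
    joint_loss.backward()  # gradients flow toward both outputs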
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Larson et al. (US 20220222372, hereinafter ‘Larson’) in view of Nguyen et al. (US 11314945, hereinafter ‘Nguyen’) in further view of Chakraborty et al. (US 10831927, hereinafter ‘Chak’) and Hazard et al. (US 11727286, hereinafter ‘Haz’).
Regarding claim 23, the rejection of claim 10 is incorporated and Larson in combination with Nguyen and Chak teaches the method of claim 10, wherein the distributions of the regressed sensitive features and the distributions of the sensitive features are compared (in [0056]: As shown by reference number 330, the machine learning system may apply the trained machine learning model 325 to a new observation, such as by receiving a new observation and inputting the new observation to the trained machine learning model 325…. The type of output may depend on the type of machine learning model and/or the type of machine learning task being performed. For example, the output may include a predicted value of a target variable, such as when supervised learning is employed. Additionally, or alternatively, the output may include information that identifies a cluster to which the new observation belongs and/or information that indicates a degree of similarity between the new observation and one or more other observations [wherein the distributions of the regressed sensitive features and the distributions of the sensitive features are compared, as determined by the clusters of similar features], such as when unsupervised learning is employed.)
Larson in combination with Nguyen and Chak do not expressly teach the statistic comprising at least one of: entropy density or Kolmogorov-Smirnov statistics.
Haz does expressly teach the statistic comprising at least one of: entropy density or Kolmogorov-Smirnov statistics, in 34:61-35:4: The Kolmogorov-Smirnov test may be considered a marginal distribution test. A Kolmogorov-Smirnov [at least one of: entropy density or Kolmogorov-Smirnov statistics] test may be performed one or more times in order to calculate a Kolmogorov-Smirnov metric for use as a statistical quality metric. In some embodiments, Kolmogorov-Smirnov tests are performed a number of times equal to a function of the number of continuous features (n) in the datasets. For example, Kolmogorov-Smirnov tests may be performed once for each continuous feature in the datasets, resulting in n KS statistics, which may then be used to determine p values. Then, the Kolmogorov-Smirnov metric may be computed as…; and in 6:55-7:3: In some embodiments, the techniques may include determining the k-anonymity, t-closeness, l-diversity, and/or other privacy measures for synthetic data cases (e.g., either as the synthetic data cases are generated, after many or all the synthetic data cases are generated, or a combination of the two) (e.g., as part of determining validity 152, fitness 160, and/or similarity 160 in FIG. 1A, 1B, 1C, and/or 1D). In some embodiments, determining the k-anonymity or t-closeness of a synthetic data case may include determining whether there are k or more training data cases that are "close" to the synthetic data case (e.g., within a threshold distance, e.g., a "similarity threshold"), and, if there are at least k training data cases that are "close", keeping the synthetic data case because one would not be able to associate the synthetic data case with less than k possible training data cases. In some embodiments, if there is at least one training data case, but fewer than k training data cases, that are within the similarity threshold of the synthetic data case, then the synthetic data case may be discarded, or values for one or more features may be redetermined and then the modified synthetic data case may be tested for similarity and/or k-anonymity and/or t-closeness to the synthetic data cases. In some embodiments, even if there are k or more training data cases that are within the similarity threshold distance of the synthetic data case, if one or more of the training data cases are within a closer threshold distance to the synthetic data case, then the synthetic data case may be discarded, or values for one or more features or data elements in a time series may be redetermined and then the modified synthetic data case may be tested for similarity and/or k-anonymity to the synthetic data cases…; and in 33:16-24: … In some embodiments, statistical quality metrics may be measures of the statistical properties of the set of training data cases and the set of two or more synthetic data cases. For example, a statistical quality metric may measure the similarity of the training dataset and synthetic dataset distributions. In some embodiments, the statistical quality metric is a function of the p-value, which increases as similarity between the two distributions increases [wherein the distributions of the regressed sensitive features and the distributions of the sensitive features are compared using at least one of: entropy density or Kolmogorov-Smirnov statistics]…
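For illustration only, a minimal example of comparing the distribution of an original sensitive feature against its regressed counterpart with the two-sample Kolmogorov-Smirnov statistic, as in the claim 23 limitation; the data below are synthetic assumptions:

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    sensitive = rng.normal(0.0, 1.0, size=1000)   # original sensitive feature
    regressed = rng.normal(0.05, 1.1, size=1000)  # regressed counterpart

    stat, p_value = ks_2samp(sensitive, regressed)
    print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
    # A small statistic / large p-value indicates similar distributions.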
Haz, Chak, Nguyen and Larson are analogous art because all four involve developing data processing and information retrieval techniques using machine learning systems and algorithms.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for training machine learning models for data processing and information retrieval tasks using similarity conditions for data cases in the original training data as disclosed by Haz with the method of developing data processing and information retrieval techniques from data including sensitive and nonsensitive personal information using machine learning systems and algorithms as collectively disclosed by Chak, Nguyen and Larson.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Haz, Chak, Nguyen and Larson in order to develop machine learning feature processing techniques of generating data representations that ensure each generated data representation is sufficiently different from all cases in the set of original training data, as opposed to sufficiently different from a particular data case from the set of original training data (Haz, 6:13-17).
Response to Arguments
Applicant's arguments filed 9/18/2025 have been fully considered by the examiner. The remarks are directed to amended language not examined by the examiner, see the current office action above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Banipal et al. (US 20220358237): teaches secure data analytics provided via a process that identifies sensitive data fields of an initial dataset and mappings between the sensitive data fields and other data fields of the dataset, where analytics processing is to be performed on the initial dataset, then, based on an expectation of data fields, of the initial data set, to be used in performance of the analytics processing and on the identified sensitive data fields, selects and applies a masking method to the initial dataset to mask the sensitive data fields and produce a masked dataset, provides the masked dataset to an analytics provider with a request for the analytics processing, and receives, in response, a generated analytics function, generated based on the masked dataset, that is configured to perform the analytics processing, and invokes the generated analytics function against the initial dataset to perform the analytics processing on the initial dataset, in abstract.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUWATOSIN ALABI whose telephone number is (571) 272-0516. The examiner can normally be reached Monday-Friday, 8:00 am-5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OLUWATOSIN ALABI/ Primary Examiner, Art Unit 2129