Prosecution Insights
Last updated: April 19, 2026
Application No. 18/766,344

ARTIFICIAL INTELLIGENCE-BASED DETERMINATION OF DATA SECURITY TECHNIQUES TO BE IMPLEMENTED FOR ELECTRONIC DATA TRANSMISSIONS

Office Action: Non-Final (§102, §103, §112)
Filed: Jul 08, 2024
Examiner: REVAK, CHRISTOPHER A
Art Unit: 2407
Tech Center: 2400 — Computer Networks
Assignee: BANK OF AMERICA CORPORATION
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 9m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 89% — above average (987 granted / 1105 resolved; +31.3% vs Tech Center average)
Interview Lift: +8.6% (moderate, ~+9% lift), based on resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 17 applications currently pending
Career History: 1122 total applications across all art units

Statute-Specific Performance

§101: 12.0% (-28.0% vs TC avg)
§103: 20.9% (-19.1% vs TC avg)
§102: 38.0% (-2.0% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 1105 resolved cases.
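The headline metrics above are simple ratios over the examiner's resolved cases. A minimal sketch of how they could be derived from the raw counts (the Tech Center baseline and the with/without-interview rates are back-solved from the displayed deltas, not taken from the underlying dataset):

```python
# Deriving the examiner dashboard metrics from raw counts.
# tc_avg_allow and the interview rates are illustrative assumptions
# reconstructed from the displayed +31.3% and +8.6% deltas.

granted = 987          # career grants
resolved = 1105        # career resolved applications (grants + abandonments)
tc_avg_allow = 0.58    # assumed Tech Center 2400 average allow rate

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")                 # ~89.3%, shown as 89%
print(f"Delta vs TC avg:  {allow_rate - tc_avg_allow:+.1%}")  # +31.3% under this baseline

# Interview lift: allow-rate difference between resolved cases
# with and without an examiner interview (rates assumed).
with_interview, without_interview = 0.98, 0.894
print(f"Interview lift: {with_interview - without_interview:+.1%}")
```

Under these assumptions the script reproduces the dashboard's 89% allow rate, +31.3% Tech Center delta, and +8.6% interview lift.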

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on January 29, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claim 15 is objected to because of the following informalities: in the preamble of claim 15, a colon is missing after “one or more computing devices to” on line 3 of the claim. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that use the word “means” or “step” but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitation(s) is/are: “artificial intelligence engine configured to receiving/execute” and “a plurality of data security applications configured to implement” in claim 1; and “AI engine is configured to receive/execute” and “plurality of data security applications is configured to be executed” in claim 8. Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof. If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that perform the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

On lines 2-3 of claim 1, reference is made to “one or more first computing processor devices”, and similarly on lines 15-16 of claim 1, reference is made to “one or more second computing processor devices”. Claim 1 recites the limitation “the one or more computing processor devices” in lines 5-6 and again in line 19. It is unclear whether “the one or more computing processor devices” refers to the “one or more first computing processor devices” and/or the “one or more second computing processor devices”. There is insufficient antecedent basis for this limitation in the claim. Claims 2-8 are rejected by virtue of their dependency upon claim 1.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 9-11, 13-17, 19, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Hammitt, U.S. Patent 11,956,268.
As per claim 9, it is taught of a computer-implemented method (col. 7, lines 8-11) for securing transmission of a data set (the data security system uses AI (e.g., machine learning models) to predict sensitivity levels of data being transmitted between devices in real time, col. 2, lines 30-33), the computer-implemented method being executable by one or more computing processor devices and comprising: receiving a data set that includes a plurality of data elements (data is transmitted between two client devices in a communication session, and the data includes communication data such as voice, text, audio, video, etc. (i.e., data elements), col. 3, lines 18-27); executing at least one trained Machine Learning (ML) model to determine whether to implement one or more data security techniques on the data set (the data security system uses AI (e.g., machine learning models) to predict sensitivity levels of data being transmitted between devices in real time and applies an appropriate level of data security (i.e., data security techniques) based on the predicted sensitivity level, col. 2, lines 30-34 and col. 3, lines 40-43); in response to determining that one or more data security techniques are to be implemented on the data set, executing the at least one trained Machine Learning (ML) model to identify at least one data security technique from amongst a plurality of data security techniques (the data security system uses AI (i.e., a trained machine learning model) to predict (i.e., identify) the level of data security (data security techniques) that is to be applied to the communication session comprising the communication data (i.e., data set), col. 3, lines 40-43 & 51-61), wherein the at least one data security technique is identified based on the at least one data security technique being most suitable for securing the data set during transmission (a higher level of data security is applied (i.e., the most suitable security technique) to communication data determined to be highly sensitive and a lower level of data security is applied (i.e., the most suitable security technique) to data determined to be less sensitive, wherein a higher level of data security means more data security mechanisms and/or more complicated data security mechanisms than are used at a lower level of data security, col. 3, lines 43-50, whereby the higher-level data security mechanism and the lower-level data security mechanism are a plurality of data security techniques, col. 4, lines 1-9); and in response to the ML model identifying the at least one data security technique, implementing the at least one data security technique on the data set (the data security system adjusts (i.e., implements) the data security level (i.e., data security technique) based on the identified level of data security that is to be applied to the communication data (i.e., data set) as determined by the trained machine learning model, col. 3, lines 50-55 and col. 4, lines 1-4).

As per claim 10, it is disclosed wherein executing the at least one trained ML model to determine further comprises executing the at least one trained ML model to determine whether to implement one or more data security techniques on the data set (the data security system adjusts (i.e., implements) the data security level (i.e., data security technique) based on the identified level of data security that is to be applied to the communication data (i.e., data set) as determined by the trained machine learning model, col. 3, lines 50-55 and col. 4, lines 1-4), wherein the determination is based at least on a comparison between (a) current availability and efficiency of computing resources required to execute the plurality of data security applications (data security configurations vary: a lower level includes fewer (i.e., current availability) and/or less resource-intensive security mechanisms (i.e., efficiency of resources), whereas a higher level includes additional (i.e., current availability) and/or more resource-intensive data security mechanisms (i.e., efficiency of resources), col. 6, lines 54-63) and (b) a need for security associated with one or more of the data elements in the data set (a sensitivity level of the communication data (i.e., data set) is determined based upon it including highly sensitive content, such as confidential information, or having a lower sensitivity level, such as not including confidential information, col. 6, lines 5-15).

As per claim 11, it is taught wherein executing the at least one trained ML model to identify further comprises executing the at least one trained ML model to identify the at least one data security technique (the data security system adjusts (i.e., implements) the data security level (i.e., data security technique) based on the identified level of data security that is to be applied to the communication data (i.e., data set) as determined by the trained machine learning model, col. 3, lines 50-55 and col. 4, lines 1-4), wherein the at least one data security technique is identified as the most suitable for securing the data set during transmission (the data security system uses AI (i.e., a trained machine learning model) to predict (i.e., identify) the level of data security (data security techniques) that is to be applied to the communication session comprising the communication data (i.e., data set), col. 3, lines 40-43 & 51-61), wherein the most suitable is based on one or more of (a) privacy and criticality of the data elements in the data set (a sensitivity level of the communication data (i.e., data set) is determined based upon it including highly sensitive content, such as confidential information (i.e., privacy and criticality, since the data is highly sensitive content), or having a lower sensitivity level, such as not including confidential information, col. 6, lines 5-15), (b) identity of one or more data recipient entities of the data set (types of users, such as title, rank, etc., are identified to receive the communication data (i.e., data set), col. 5, lines 13-18), (c) historical success rates associated with each of the plurality of data security techniques (historical communication data has been labeled to indicate the sensitivity level of communications that have been evaluated (i.e., success rates), col. 4, line 65 through col. 5, line 8), and (d) current availability and efficiency of computing resources required to execute a corresponding one of the plurality of data security applications (data security configurations vary: a lower level includes fewer (i.e., current availability) and/or less resource-intensive security mechanisms (i.e., efficiency of resources), whereas a higher level includes additional (i.e., current availability) and/or more resource-intensive data security mechanisms (i.e., efficiency of resources), col. 6, lines 54-63).
As per claim 13, it is taught wherein executing the at least one trained ML model to identify further comprises executing the at least one trained ML model to identify a combination of two or more of the plurality of data security techniques, wherein the combination of two or more data security techniques is identified based on the combination being most suitable for securing the data set during transmission (the data security configuration component modifies a data security level of a communication session based upon a sensitivity level value previously determined using a trained machine learning model; the security configurations (i.e., data security techniques) operate according to a variety of data security configurations (i.e., combination of two or more), each providing a different data security level, such as a type of encryption or encryption key to be used, how the communications should be routed, whether quantum encryption should be used, etc. (i.e., combination of two or more most suitable), col. 6, lines 39-53).

As per claim 14, it is disclosed wherein the computer-implemented method is executed while the data set is inflight between a data sending entity and one or more data recipient entities (figure 1 shows the data security system intercepting communications sent between the two client devices, which is interpreted as inflight, wherein data is transmitted between two client devices in a communication session and the data includes communication data such as voice, text, audio, video, etc. (i.e., data elements), col. 3, lines 18-27).
As per claim 15, it is taught of a computer program product including a non-transitory computer-readable medium, the non-transitory computer-readable medium comprising sets of codes (the data security system uses AI (i.e., a trained machine learning model, or sets of code) to predict (i.e., identify) the level of data security (data security techniques) that is to be applied to the communication session comprising the communication data (i.e., data set), col. 3, lines 40-43 & 51-61) for causing one or more computing devices (col. 11, line 34 through col. 12, line 12) to: receive a data set that includes a plurality of data elements (data is transmitted between two client devices in a communication session, and the data includes communication data such as voice, text, audio, video, etc. (i.e., data elements), col. 3, lines 18-27); execute at least one trained Machine Learning (ML) model to determine whether to implement one or more data security techniques on the data set (the data security system uses AI (e.g., machine learning models) to predict sensitivity levels of data being transmitted between devices in real time and applies an appropriate level of data security (i.e., data security techniques) based on the predicted sensitivity level, col. 2, lines 30-34 and col. 3, lines 40-43); in response to determining that one or more data security techniques are to be implemented on the data set, execute the at least one trained ML model to identify at least one data security technique from amongst a plurality of data security techniques (the data security system uses AI (i.e., a trained machine learning model) to predict (i.e., identify) the level of data security (data security techniques) that is to be applied to the communication session comprising the communication data (i.e., data set), col. 3, lines 40-43 & 51-61), wherein the at least one data security technique is identified based on the at least one data security technique being most suitable for securing the data set during transmission (a higher level of data security is applied (i.e., the most suitable security technique) to communication data determined to be highly sensitive and a lower level of data security is applied (i.e., the most suitable security technique) to data determined to be less sensitive, wherein a higher level of data security means more data security mechanisms and/or more complicated data security mechanisms than are used at a lower level of data security, col. 3, lines 43-50, whereby the higher-level data security mechanism and the lower-level data security mechanism are a plurality of data security techniques, col. 4, lines 1-9); and in response to the trained ML model identifying the at least one data security technique, implement the at least one data security technique on the data set (the data security system adjusts (i.e., implements) the data security level (i.e., data security technique) based on the identified level of data security that is to be applied to the communication data (i.e., data set) as determined by the trained machine learning model, col. 3, lines 50-55 and col. 4, lines 1-4).

As per claim 16, it is disclosed wherein the set of codes for causing the one or more computing devices to execute the at least one trained ML model to determine further causes the one or more computing devices to execute the at least one trained ML model to determine whether to implement one or more data security techniques on the data set (the data security system adjusts (i.e., implements) the data security level (i.e., data security technique) based on the identified level of data security that is to be applied to the communication data (i.e., data set) as determined by the trained machine learning model, col. 3, lines 50-55 and col. 4, lines 1-4), wherein the determination is based at least on a comparison between (a) current availability and efficiency of computing resources required to execute the plurality of data security applications (data security configurations vary: a lower level includes fewer (i.e., current availability) and/or less resource-intensive security mechanisms (i.e., efficiency of resources), whereas a higher level includes additional (i.e., current availability) and/or more resource-intensive data security mechanisms (i.e., efficiency of resources), col. 6, lines 54-63) and (b) a need for security associated with one or more of the data elements in the data set (a sensitivity level of the communication data (i.e., data set) is determined based upon it including highly sensitive content, such as confidential information, or having a lower sensitivity level, such as not including confidential information, col. 6, lines 5-15).

As per claim 17, it is taught wherein the set of codes for causing the one or more computing devices to execute the at least one trained ML model to identify further causes the one or more computing devices to execute the at least one trained ML model to identify the at least one data security technique (the data security system adjusts (i.e., implements) the data security level (i.e., data security technique) based on the identified level of data security that is to be applied to the communication data (i.e., data set) as determined by the trained machine learning model, col. 3, lines 50-55 and col. 4, lines 1-4), wherein the at least one data security technique is identified as the most suitable for securing the data set during transmission (the data security system uses AI (i.e., a trained machine learning model) to predict (i.e., identify) the level of data security (data security techniques) that is to be applied to the communication session comprising the communication data (i.e., data set), col. 3, lines 40-43 & 51-61), wherein the most suitable is based on one or more of (a) privacy and criticality of the data elements in the data set (a sensitivity level of the communication data (i.e., data set) is determined based upon it including highly sensitive content, such as confidential information (i.e., privacy and criticality, since the data is highly sensitive content), or having a lower sensitivity level, such as not including confidential information, col. 6, lines 5-15), (b) identity of one or more data recipient entities of the data set (types of users, such as title, rank, etc., are identified to receive the communication data (i.e., data set), col. 5, lines 13-18), (c) historical success rates associated with each of the plurality of data security techniques (historical communication data has been labeled to indicate the sensitivity level of communications that have been evaluated (i.e., success rates), col. 4, line 65 through col. 5, line 8), and (d) current availability and efficiency of computing resources required to execute a corresponding one of the plurality of data security applications (data security configurations vary: a lower level includes fewer (i.e., current availability) and/or less resource-intensive security mechanisms (i.e., efficiency of resources), whereas a higher level includes additional (i.e., current availability) and/or more resource-intensive data security mechanisms (i.e., efficiency of resources), col. 6, lines 54-63).
As per claim 19, it is taught wherein the set of codes for causing the one or more computing devices to execute the at least one trained ML model to identify further causes the one or more computing devices to execute the at least one trained ML model to identify a combination of two or more of the plurality of data security techniques, wherein the combination of two or more data security techniques is identified based on the combination being most suitable for securing the data set during transmission (the data security configuration component modifies a data security level of a communication session based upon a sensitivity level value previously determined using a trained machine learning model; the security configurations (i.e., data security techniques) operate according to a variety of data security configurations (i.e., combination of two or more), each providing a different data security level, such as a type of encryption or encryption key to be used, how the communications should be routed, whether quantum encryption should be used, etc. (i.e., combination of two or more most suitable), col. 6, lines 39-53).

As per claim 20, it is disclosed wherein the sets of codes (the data security system uses AI (i.e., a trained machine learning model, or sets of codes) to predict (i.e., identify) the level of data security (data security techniques) that is to be applied to the communication session comprising the communication data (i.e., data set), col. 3, lines 40-43 & 51-61) are executed while the data set is inflight between a data sending entity and one or more data recipient entities (figure 1 shows the data security system intercepting communications sent between the two client devices, which is interpreted as inflight, wherein data is transmitted between two client devices in a communication session and the data includes communication data such as voice, text, audio, video, etc. (i.e., data elements), col. 3, lines 18-27).
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Hammitt, U.S. Patent 11,956,268, in view of Gajula et al., WO 2023/0149715 A1.

As per claim 1, it is taught by Hammitt of a system (col. 11, line 34 through col. 12, line 12) for securing transmission of a data set (the data security system uses AI (e.g., machine learning models) to predict sensitivity levels of data being transmitted between devices in real time, col. 2, lines 30-33), the system comprising: a first computing platform including a first memory and one or more first computing processor devices in communication with the first memory (the data security system/machine (i.e., first computing platform) comprises a processor device in communication with a first memory, col. 11, lines 31-37 and col. 11, line 64 through col. 12, line 5), wherein the first memory stores: an Artificial Intelligence (AI) engine including one or more machine learning (ML) models (the data security system uses AI (e.g., machine learning models) to predict sensitivity levels of data being transmitted between devices in real time, col. 2, lines 30-33), wherein the AI engine is executable by at least one of the one or more computing processor devices (col. 11, lines 31-37) and configured to: receive a data set that includes a plurality of data elements (data is transmitted between two client devices in a communication session, and the data includes communication data such as voice, text, audio, video, etc. (i.e., data elements), col. 3, lines 18-27); and execute at least one of the one or more ML models, wherein the at least one of the one or more ML models is trained to (i) determine whether to implement one or more data security techniques on the data set and, in response to determining that one or more data security techniques are to be implemented on the data set (the data security system uses AI (i.e., a trained machine learning model) to predict (i.e., identify) the level of data security (data security techniques) that is to be applied to the communication session comprising the communication data (i.e., data set), col. 3, lines 40-43 & 51-61), (ii) identify at least one data security technique from amongst a plurality of data security techniques, wherein the at least one data security technique is identified based on the at least one data security technique being most suitable for securing the data set during transmission (a higher level of data security is applied (i.e., the most suitable security technique) to communication data determined to be highly sensitive and a lower level of data security is applied (i.e., the most suitable security technique) to data determined to be less sensitive, wherein a higher level of data security means more data security mechanisms and/or more complicated data security mechanisms than are used at a lower level of data security, col. 3, lines 43-50, whereby the higher-level data security mechanism and the lower-level data security mechanism are a plurality of data security techniques, col. 4, lines 1-9).

Hammitt further teaches a second computing platform (the data security system comprising a data security configuration component) that is responsible for storing: a plurality of data security applications, each data security application executable by at least one of the one or more computing processor devices and configured to implement one of the plurality of data security techniques (the data security configuration component modifies a data security level of a communication session based upon a sensitivity level value previously determined using a trained machine learning model; the security configurations (i.e., data security techniques) operate according to a variety of data security configurations (i.e., a plurality of security techniques as dictated by a plurality of security applications that are specifically applied), each providing a different data security level, such as a type of encryption or encryption key to be used, how the communications should be routed, whether quantum encryption should be used, etc. (i.e., combination of two or more most suitable), col. 6, lines 39-53), wherein, in response to the at least one of the one or more ML models identifying the at least one data security technique, at least one of the plurality of data security applications corresponding to the at least one data security technique is executed on the data set to implement the at least one data security technique on the data set (the data security system adjusts (i.e., implements) the data security level (i.e., data security technique) based on the identified level of data security that is to be applied to the communication data (i.e., data set) as determined by the trained machine learning model, col. 3, lines 50-55 and col. 4, lines 1-4). The teachings of Hammitt, however, fail to disclose the second computing platform including a second memory and one or more second computing processor devices in communication with the second memory.
Gajula et al discloses a second computing platform (a neural processing unit that is a dedicated AI processor) including a second memory (dedicated memory) and one or more second computing processor devices in communication with the second memory (the memory is configured to store the rule-set information and is in communication with other components within the MCX server system, page 9, paragraph 46). It would have been obvious to a person of ordinary skill in the art at the effective filing date of the claimed invention to have recognized the benefit of a dedicated platform component, with its own dedicated processor and memory, that focuses on dedicated tasks without taking resources away from other processes executing in the system and that has the necessary processing power to execute AI-driven tasks. Gajula et al discloses the need for a rule-based mission-critical communication system for faster and more effective communications, see paragraph 6 on page 2. The teachings of Hammitt suggest various configurations of hardware components, such as processors and memory, operating within the scope of the invention (col. 11, lines 31-37); the teachings of Gajula et al offer an additional configuration not explicitly taught by Hammitt by providing a dedicated processor and memory within the platform for faster and more effective communications. As per claim 2, it is disclosed by Hammitt wherein the at least one of the one or more ML models is trained to determine whether to implement one or more data security techniques on the data set (the data security system adjusts (i.e., implements) the data security level (i.e., data security technique) based on the identified level of data security that is to be applied to the communication data (i.e., data set) as determined by the trained machine learning model, col. 3, lines 50-55 and col. 
4, lines 1-4), wherein the determination is based at least on a comparison between (a) current availability and efficiency of computing resources required to execute the plurality of data security applications (data security configurations vary: lower levels include fewer (i.e., current availability) and/or less resource-intensive security mechanisms (i.e., efficiency of resources), wherein higher levels include additional (i.e., current availability) and/or more resource-intensive data security mechanisms (i.e., efficiency of resources), col. 6, lines 54-63) and (b) a need for security associated with one or more of the data elements in the data set (a sensitivity level of the communications data (i.e., data set) is determined based upon it including highly sensitive content, such as confidential information, or having a lower sensitivity level, such as not including confidential information, col. 6, lines 5-15). As per claim 3, it is taught by Hammitt wherein at least one of the one or more ML models is trained to identify the at least one data security technique from amongst a plurality of data security techniques (the data security system adjusts (i.e., implements) the data security level (i.e., data security technique) based on the identified level of data security that is to be applied to the communication data (i.e., data set) as determined by the trained machine learning model, col. 3, lines 50-55 and col. 4, lines 1-4), wherein the at least one data security technique is identified as the most suitable for securing the data set during transmission (the data security system uses AI (i.e., a trained machine learning model) to predict (i.e., identify) the level of data security (data security techniques) that is to be applied to the communication session comprising the communication data (i.e., data set), col. 
3, lines 40-43 & 51-61), wherein the most suitable is based on one or more of (a) privacy and criticality of the data elements in the data set (a sensitivity level of the communications data (i.e., data set) is determined based upon it including highly sensitive content, such as confidential information (i.e., privacy and criticality, since the data is highly sensitive content), or having a lower sensitivity level, such as not including confidential information, col. 6, lines 5-15), (b) identity of one or more data recipient entities of the data set (types of users are identified, such as title, rank, etc., to receive the communication data (i.e., data set), col. 5, lines 13-18), (c) historical success rates associated with each of the plurality of data security techniques (historical communication data that has been labeled to indicate the sensitivity level of communications that have been evaluated (i.e., success rates), col. 4, line 65 through col. 5, line 8), and (d) current availability and efficiency of computing resources required to execute a corresponding one of the plurality of data security applications (data security configurations vary: lower levels include fewer (i.e., current availability) and/or less resource-intensive security mechanisms (i.e., efficiency of resources), wherein higher levels include additional (i.e., current availability) and/or more resource-intensive data security mechanisms (i.e., efficiency of resources), col. 6, lines 54-63). 
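The mechanism Hammitt is cited for (a trained model predicts a sensitivity level, which is then mapped to a tiered data security configuration, with higher tiers adding more resource-intensive mechanisms) can be sketched in Python. The tier contents, keyword heuristic, and function names below are hypothetical stand-ins for illustration only, not Hammitt's actual implementation:

```python
# Hypothetical tiered configurations: a higher sensitivity level maps to more
# (and more resource-intensive) security mechanisms, per the mapping above.
SECURITY_TIERS = {
    0: {"mechanisms": ["tls"]},                                     # low sensitivity
    1: {"mechanisms": ["tls", "payload_encryption"]},               # medium
    2: {"mechanisms": ["tls", "payload_encryption", "rerouting"]},  # high
}


def classify_sensitivity(text: str) -> int:
    """Stand-in for the trained ML model: predict a sensitivity level.

    A real system would use a model trained on labeled historical
    communications; a keyword heuristic keeps this sketch self-contained.
    """
    keywords = {"confidential": 2, "ssn": 2, "account": 1, "internal": 1}
    return max((lvl for kw, lvl in keywords.items() if kw in text.lower()),
               default=0)


def select_security_config(data_set: str) -> dict:
    """Map the predicted sensitivity level to a data security configuration."""
    return SECURITY_TIERS[classify_sensitivity(data_set)]
```

On low-sensitivity input this selects only the baseline tier, while text flagged as confidential escalates to the full mechanism stack, mirroring the higher-level/lower-level distinction the examiner maps.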
As per claim 4, it is disclosed by Hammitt wherein the plurality of data security techniques comprise data masking techniques (encryption) including one or more of (i) data obfuscation, (ii) data scrambling, and (iii) data anonymization (the data security level (i.e., data security technique) of the communication session is based on a sensitivity level value; the session operates at a variety of data security configurations, each providing a different security level, and a type of encryption (masking, such as data obfuscation or data scrambling) or encryption key may be used, col. 6, lines 39-53). As per claim 7, it is taught by Hammitt wherein at least one of the one or more ML models is trained to (ii) identify the at least one data security technique from amongst a plurality of data security techniques, wherein the at least one data security technique is further defined as a combination of two or more of the plurality of data security techniques, wherein the combination of two or more data security techniques is identified based on the combination of two or more of the data security techniques being most suitable for securing the data set during transmission (the data security configuration component modifies a data security level of a communication session based upon a sensitivity level value previously determined using a trained machine learning model; the security configurations (i.e., data security techniques) operate according to a variety of data security configurations (i.e., a combination of two or more), each providing a different data security level, such as a type of encryption or encryption key to be used, how the communications should be routed, whether quantum encryption should be used, etc. (i.e., a combination of two or more most suitable), col. 6, lines 39-53). 
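The three masking techniques recited in claim 4 can each be illustrated with a minimal sketch. The helper functions below are hypothetical examples of each category, assuming a production system would instead use vetted cryptographic and tokenization libraries:

```python
import hashlib
import random


def obfuscate(value: str) -> str:
    """Data obfuscation: hide all but the last four characters."""
    return "*" * max(len(value) - 4, 0) + value[-4:]


def scramble(value: str, seed: int = 7) -> str:
    """Data scrambling: deterministically shuffle characters, so the
    same seed always yields the same reordering."""
    chars = list(value)
    random.Random(seed).shuffle(chars)
    return "".join(chars)


def anonymize(value: str) -> str:
    """Data anonymization: replace the value with a one-way token."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]
```

Obfuscation keeps a recognizable suffix, scrambling preserves the character set while destroying order, and anonymization severs the link to the original value entirely.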
As per claim 8, it is disclosed by Hammitt wherein (i) the AI engine is configured to receive the data set and execute at least one of the one or more ML models (the data security system uses AI (i.e., a trained machine learning model) to predict (i.e., identify) the level of data security (data security techniques) that is to be applied to the communication session comprising the communication data (i.e., data set), col. 3, lines 40-43 & 51-61) and (ii) the at least one of the plurality of data security applications is configured to be executed while the data set is inflight between a data sending entity and one or more data recipient entities (figure 1 shows the data security system intercepting communications sent between the two client devices, which is interpreted as inflight; data is transmitted between two client devices in a communication session, and the data includes communication data, such as voice, text, audio, video, etc. (i.e., data elements), col. 3, lines 18-27). Claims 12 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Hammitt, U.S. Patent 11,956,268, in view of Loytynoja et al, US 2012/0011592. As per claim 12, it is disclosed by Hammitt wherein the plurality of data security techniques comprises (i) data masking (encryption) techniques including one or more of data obfuscation, data scrambling, and data anonymization (the data security level (i.e., data security technique) of the communication session is based on a sensitivity level value; the session operates at a variety of data security configurations, each providing a different security level, and a type of encryption (masking, such as data obfuscation or data scrambling) or encryption key may be used, col. 6, lines 39-53). The teachings of Hammitt fail to disclose embedded hidden information including one or more of steganography and digital watermarking, and frequency hopping. 
It is disclosed by Loytynoja et al to embed hidden information (a non-detectable fingerprint is embedded, paragraph 0057) including one or more of steganography and digital watermarking, and frequency hopping (a watermark is added to a file, and an embedding algorithm may combine several digital watermark techniques, such as frequency hopping, see paragraph 0058). It would have been obvious to a person of ordinary skill in the art at the effective filing date of the invention to have been motivated to apply watermarking and frequency hopping as a means to further secure protected data. The teachings of Loytynoja et al disclose using the techniques since a rights owner may be able to find out who is illegally distributing the content by inserting a non-detectable user fingerprint, see paragraph 0057. Although the teachings of Hammitt disclose applying encryption to protect confidential content, the teachings of Loytynoja et al offer a further layer of security for protected data by watermarking and applying frequency hopping as a means to protect content created by a rights owner. As per claim 18, it is disclosed by Hammitt wherein the plurality of data security techniques comprises (i) data masking (encryption) techniques including one or more of data obfuscation, data scrambling, and data anonymization (the data security level (i.e., data security technique) of the communication session is based on a sensitivity level value; the session operates at a variety of data security configurations, each providing a different security level, and a type of encryption (masking, such as data obfuscation or data scrambling) or encryption key may be used, col. 6, lines 39-53). Hammitt fails to disclose embedded hidden information including one or more of steganography and digital watermarking, and frequency hopping. 
It is disclosed by Loytynoja et al to embed hidden information (a non-detectable fingerprint is embedded, paragraph 0057) including one or more of steganography and digital watermarking, and frequency hopping (a watermark is added to a file, and an embedding algorithm may combine several digital watermark techniques, such as frequency hopping, see paragraph 0058). It would have been obvious to a person of ordinary skill in the art at the effective filing date of the invention to have been motivated to apply watermarking and frequency hopping as a means to further secure protected data. The teachings of Loytynoja et al disclose using the techniques since a rights owner may be able to find out who is illegally distributing the content by inserting a non-detectable user fingerprint, see paragraph 0057. Although the teachings of Hammitt disclose applying encryption to protect confidential content, the teachings of Loytynoja et al offer a further layer of security for protected data by watermarking and applying frequency hopping as a means to protect content created by a rights owner. Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Hammitt, U.S. Patent 11,956,268, in view of Gajula et al, WO 2023/0149715 A1, in further view of Loytynoja et al, US 2012/0011592. As per claim 5, Hammitt teaches applying encryption to the confidential information; however, the combination of Hammitt and Gajula et al fails to disclose wherein the plurality of data security techniques comprises frequency hopping and/or embedded hidden information including one or more of (i) steganography and (ii) digital watermarking. 
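The combination Loytynoja et al is cited for, a digital watermark whose embedding uses frequency hopping so the mark's carrier moves among several bands under a key, can be sketched as a toy example. The band list, key, per-bit segmenting, and amplitude below are arbitrary assumptions for illustration, not Loytynoja's actual algorithm:

```python
import math
import random

SAMPLE_RATE = 8000
BANDS_HZ = [1000, 1500, 2000, 2500]  # candidate carrier frequencies


def embed_watermark(signal, bits, key=42, amplitude=0.01):
    """Embed watermark bits into an audio signal: for each bit, a keyed
    pseudorandom generator hops among BANDS_HZ to pick a carrier; a 1-bit
    adds a faint tone at that frequency, a 0-bit leaves its segment alone."""
    rng = random.Random(key)  # the key makes the hop sequence reproducible
    out = list(signal)        # do not mutate the caller's signal
    seg = len(signal) // len(bits)
    for i, bit in enumerate(bits):
        freq = rng.choice(BANDS_HZ)  # frequency hop for this bit
        if bit:
            for n in range(i * seg, (i + 1) * seg):
                out[n] += amplitude * math.sin(
                    2 * math.pi * freq * n / SAMPLE_RATE)
    return out
```

A detector holding the same key can regenerate the hop sequence and correlate each segment against the expected carrier, which is what makes the mark hard to find (and strip) without the key.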
It is taught by Loytynoja et al wherein the plurality of data security techniques comprises frequency hopping and/or embedded hidden information (a non-detectable fingerprint is embedded, paragraph 0057) including one or more of (i) steganography and (ii) digital watermarking (a watermark is added to a file, and an embedding algorithm may combine several digital watermark techniques, such as frequency hopping, see paragraph 0058). It would have been obvious to a person of ordinary skill in the art at the effective filing date of the invention to have been motivated to apply watermarking and frequency hopping as a means to further secure protected data. The teachings of Loytynoja et al disclose using the techniques since a rights owner may be able to find out who is illegally distributing the content by inserting a non-detectable user fingerprint, see paragraph 0057. Although the combined teachings of Hammitt and Gajula et al disclose applying encryption to protect confidential content, the teachings of Loytynoja et al offer a further layer of security for protected data by watermarking and applying frequency hopping as a means to protect content created by a rights owner. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wouters et al, US 2021/0050024 is relied upon for disclosing applying watermarking and frequency hopping to protect content, see paragraph 0008. Albero et al, US 2024/0160704 is a related teaching by the Applicant using watermarking and frequency hopping, see paragraph 0065. Parla et al, US 2025/0039239 is relied upon for disclosing enforcement points applying network data to machine learning models to modify security operations for mitigation against suspicious data packets, see abstract. 
Calzolari et al, US 2023/0061864 is relied upon for disclosing a machine learning model to receive inputted communications and implement a policy on the outputted communications, see paragraph 0055. O’Neil, US 2022/0353299 is relied upon for disclosing creating policies for network communications by applying machine learning to stored information and flow data, see claim 1. Wisniewski et al, US 2020/0045080 is relied upon for disclosing modifying a set of rules for an encryption policy engine, based upon a determination that a portion of a data stream is not encrypted according to a specific encryption policy, by performing machine learning, see paragraph 0056. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER REVAK whose telephone number is (571)272-3794. The examiner can normally be reached 5:30am - 3:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Catherine Thiaw, can be reached at 571-270-1138. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /CHRISTOPHER A REVAK/Primary Examiner, Art Unit 2407

Prosecution Timeline

Jul 08, 2024
Application Filed
Dec 18, 2025
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602477
DETECTING TARGETED INTRUSION ON MOBILE DEVICES
2y 5m to grant Granted Apr 14, 2026
Patent 12596798
PROBABILISTIC TRACKER MANAGEMENT FOR MEMORY ATTACK MITIGATION
2y 5m to grant Granted Apr 07, 2026
Patent 12591698
SECURE DATA PARSER METHOD AND SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12579251
SYSTEM AND METHOD FOR DETECTING EXCESSIVE PERMISSIONS IN IDENTITY AND ACCESS MANAGEMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12561439
LOCATION-BASED IHS FUNCTIONALITY LIMITING SYSTEM AND METHOD
2y 5m to grant Granted Feb 24, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
89%
Grant Probability
98%
With Interview (+8.6%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 1105 resolved cases by this examiner. Grant probability derived from career allow rate.
