DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. The following office action is a Final Office Action in response to the communications received on 12/18/2025.
Claims 24, 36 and 46 have been amended; claims 1-23, 31, 37, 40-43 and 45 have been canceled. Therefore, claims 24-30, 32-36, 38, 39, 44 and 46-49 are currently pending in this application.
Claim Rejections - 35 USC § 101
3. Non-Statutory (Directed to a Judicial Exception without an Inventive Concept/Significantly More)
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
● Claims 24-30, 32-36, 38, 39, 44 and 46-49 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
(Step 1)
The current claims fall within one of the four statutory categories of invention (MPEP 2106.03).
(Step 2A) → Prong-One:
The current claims recite a judicial exception, namely an abstract idea, as shown below:
— Considering each of claims 24 and 36 as representative claims, the following claimed limitations recite an abstract idea:
— Claim 24:
[collect] historical data for a user; [collect] data indicating a plurality of actions of the user, wherein the [collected] data indicates a first threat and a second threat; use the historical data and the [collected] data to determine a first susceptibility of the user to the first threat and a second susceptibility of the user to the second threat; determine, based on the first susceptibility of the user to the first threat and the second susceptibility of the user to the second threat, that the user is more susceptible to the first threat; determine an amount of time available to train the user, wherein the amount of time available to train the user indicates that a single training intervention is sufficient to train the user; based on the determination that the user is more susceptible to the first threat and the amount of time available to train the user, select a training intervention, wherein the training intervention corresponds to the first threat; and wherein the training intervention comprises an interactive training intervention, wherein the training intervention and an associated delivery time are selected using a quantitative model, and wherein the quantitative model is [built] by applying statistical analysis to historical data to identify susceptibility estimates comprising probabilities that a given user will fall victim to a given threat scenario over different periods of time, and correlate the susceptibility estimates, threat scenario, and the different periods of time; [present] the selected training intervention to the user at a first time, corresponding to the associated delivery time, wherein [presenting] the selected training intervention to be presented to the user at the first time comprises presenting the selected training intervention [during] an existing session of the user rather than presenting the selected training session separate from the existing session; schedule deployment of smart phones to a plurality of users including the user and a second user; and [present] training intervention to a second user at a second time, later than the first time, wherein the selected training intervention comprises a smart phone security training intervention; and wherein [presenting] the selected training intervention further comprises initiating an order for fake malware-containing memory devices to be delivered to the second user; and receive feedback information indicating interactions with the selected training intervention, wherein [selection of] future training interventions is based on the feedback information.
— Claim 36:
[collect] historical data for a user; [collect] data indicating a plurality of actions of the user, wherein the [collected] data indicates a first threat and a second threat; use the historical data and the [collected] data to determine a first susceptibility of the user to the first threat and a second susceptibility of the user to the second threat; determine, based on the first susceptibility of the user to the first threat and the second susceptibility of the user to the second threat, that the user is more susceptible to the first threat; determine an amount of time available to train the user, wherein the amount of time available to train the user indicates that a single training intervention is sufficient to train the user; select, based on the determination that the user is more susceptible to the first threat and the amount of time available to train the user, a training intervention that corresponds to the first threat, wherein the training intervention comprises an interactive training intervention, wherein the training intervention and an associated delivery time are selected using a quantitative model, and wherein the quantitative model is [built] by applying statistical analysis to historical data to identify susceptibility estimates comprising probabilities that a given user will fall victim to a given threat scenario over different periods of time, and correlate the susceptibility estimates, threat scenario, and the different periods of time; [send] the selected training intervention to the user at a first time, corresponding to the associated delivery time, wherein [presenting] the selected training intervention to be presented to the user at the first time comprises presenting the selected training intervention [during] an existing session of the user rather than presenting the selected training session separate from the existing session; schedule deployment of smart phones to a plurality of users including the user and a second user; [present] training intervention to a second user at a second time, later than the first time, wherein the selected training intervention comprises a smart phone security training intervention; and wherein [presenting] the training intervention further comprises initiating an order for fake malware-containing memory devices to be delivered to the second user; receive feedback information indicating interactions with the selected training intervention, wherein [selection of] future training interventions is based on the feedback information.
Thus, the limitations identified above recite an abstract idea since the limitations correspond to certain methods of organizing human activity and/or mental processes, which are part of the enumerated groupings of abstract ideas identified under the current eligibility standard (see MPEP 2106.04(a)). For instance, the current claims correspond to managing personal behavior, such as teaching, wherein the susceptibility or weakness of a user to one or more threats (e.g., a first threat and/or a second threat) is determined based on the analysis of collected data, such as historical data of the user and observed interactions/actions of the user, wherein the training intervention and the associated delivery time are selected using a quantitative model; and thereby, based on the amount of time available to train the user, the user is provided with relevant training in order to manage or improve the user’s condition. Furthermore, based on scheduled data related to deployment of smart phones to a plurality of users, including the user and a second user, a training intervention regarding smart phone security is presented to the second user at a second time, later than the first time; and the quantitative model above is updated based on feedback information received regarding interactions with the selected training intervention.
Similarly, given the limitations that recite the process of determining the susceptibility of the user to a threat (e.g., a first threat, a second threat) based on the analysis of collected data (e.g., the user’s interactions/actions and the user’s historical data), including: performing a comparison process in order to determine that the user is more susceptible to a first threat; determining an amount of time available to train the user; identifying, based on a quantitative model formulated by applying statistical analysis and data mining to historical data, susceptibility estimates that comprise probabilities that a given user will fall victim to a given threat scenario over different periods of time; correlating the susceptibility estimates, the given threat scenarios, and the different periods of time; determining that a user input comprises scheduling deployment of smart phones to a plurality of users including the user and a second user; etc., the claims also correspond to mental processes, such as an evaluation, an observation, and/or a judgment process.
(Step 2A) → Prong-Two:
The claims recite additional elements, wherein an electronic device, one or more processors, or a host computer with a processor and a non-transitory memory, near field communication tags or radio frequency identification tags, etc., are utilized to facilitate the recited functions regarding: collecting data related to a user (e.g., “receiving historical data for a user”; “receiving sensor data indicating a plurality of sensed actions of the user, wherein the sensor data indicates at least a first cybersecurity threat and a second cybersecurity threat, and wherein receiving the sensor data comprises receiving data from one or more of physical sensors or software-implemented sensors configured to detect whether a malware program is installed at or whether a storage device is inserted into the electronic device”); making one or more determinations based on the analysis of the collected data (e.g., “using the historical data and the sensor data to determine a first susceptibility of the user . . . a second susceptibility of the user to the second cybersecurity threat”; “determining, based on the first susceptibility . . . the second susceptibility . . . that the user is more susceptible to the first cybersecurity threat”; “determining an amount of time available to train the user . . . a single training intervention is sufficient to train the user”); selecting, using a trained algorithm/model, a content item(s) or training based on one or more results determined above (“based on the determination . . . selecting a training intervention . . . and wherein the training intervention comprises an interactive training intervention wherein the training intervention and an associated delivery time are selected using a quantitative training needs model . . . configured by applying statistical analysis and data mining . . . storing correlations . . . wherein causing the selected training intervention to be presented to the user at the first time comprises presenting the selected training intervention within an existing software program session of the user rather than presenting the selected training intervention within an interactive training module, separate from the existing software program session and known to present mock attack situations rather than actual attack situations ”); transmitting the content item(s) to an electronic device for presentation to the user (e.g., transmit the selected training intervention to an electronic device, thereby presenting the training intervention to the user); collecting further input (e.g., “detecting a first sensed action comprising a user input scheduling deployment of smart phones to a plurality of users including the user and a second user”); and presenting one or more further content items (e.g., “causing the selected training intervention to be presented to a second user at a second time, later than the first time, wherein the selected training intervention comprises a smart phone security training intervention . . . causing a device of the second user to initiate a download of mock malware of the device of the second user using mock malicious short range tags that, when read by the device of the second user, cause the device of the second user to download the mock malware . . . 
automatically initiating an order for fake malware-containing memory devices to be delivered to the second user ”); collecting feedback information and applying the feedback for future presentation of content (e.g., “receiving feedback information from the electronic device and the device of the second user indicating interactions with the selected training intervention, wherein selection of future training interventions is based on the feedback information”), etc.
Each of the current claims recites the use of a sensor and a voice recognition technology. However, given the high level of generality of these devices/sensors, the claimed method/system utilizes the existing technology merely for data-gathering purposes; and this corresponds to insignificant extra-solution activity.
Accordingly, the claimed additional elements fail to integrate the abstract idea into a patent-eligible practical application since the additional elements are utilized merely as a tool to facilitate the abstract idea and fail to impose meaningful limits on practicing it. Moreover, when each of the claims is considered as a whole, none of the claims provides a technological improvement over the relevant existing technology.
Thus, the above confirms that the claims are indeed directed to an abstract idea.
(Step 2B)
Accordingly, when the claim(s) is considered as a whole (i.e., considering all claim elements both individually and in combination), the claimed additional elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claim(s) amounts to “significantly more” than the abstract idea itself (also see MPEP 2106).
The additional elements are directed to conventional computer elements, which serve merely to perform conventional computer functions. Accordingly, none of the current claims recites an element—or a combination of elements—directed to an inventive concept.
It is worth noting that, per the original disclosure, the currently claimed invention is directed to a conventional and generic arrangement of the additional elements. For instance, the disclosure describes one or more commercially available conventional computing devices (e.g., smartphones, laptop computers, tablet computers, etc.), which are utilized to provide one or more trainings to a user based on the analysis of information collected regarding the user ([0033] to [0036]).
It is further worth noting that the utilization of the conventional computer/network technology to facilitate training, such as (i) analyzing collected information related to a user in order to identify one or more deficiencies, and thereby (ii) generating one or more pertinent trainings for the user in order to remedy the identified deficiencies, etc., is already directed to a well-understood, routine, or conventional activity in the art (e.g., see US 2006/0127871; US 2008/0114709; US 2003/0129575; US 2006/0047544; etc.).
Of course, it is also part of the conventional computer/network technology to utilize: (a) a voice recognition technology to determine whether a user is providing sensitive information during interaction (e.g., see US 2004/0208307: [0025], [0053]); and also (b) a machine-learning algorithm(s) to select relevant training material(s) for a user based on the analysis of data gathered regarding the user (e.g., US 2008/0286737; US 2006/0233346; US 2006/0166174; etc.).
The above observation confirms that the current claimed invention fails to amount to “significantly more” than an abstract idea.
It is also worth noting that the above analysis already encompasses each of the current dependent claims (i.e., claims 25-30, 32-35, 38, 39, 44 and 46-49). Particularly, each of the dependent claims also fails to amount to “significantly more” than the abstract idea since each dependent claim is directed to a further abstract idea, and/or a further conventional computer element(s) utilized to facilitate the abstract idea.
Thus, none of the current claims, when considered as a whole, implements an element—or a combination of elements—directed to an inventive concept (e.g., no claim element—or a combination of claim elements—that provides a technological improvement over the conventional computer/network technology).
► Applicant’s arguments directed to § 101 have been fully considered (the arguments filed on 12/18/2025). However, the Office respectfully disagrees with Applicant’s arguments at least for the following reasons:
Firstly, while referring to the Office’s findings under prong-one of Step 2A, Applicant asserts that “the claims, as amended, are not directed to a method of organizing human activity or mental process. Rather, claim 24, as amended, is more complex than merely reciting the performance of ‘managing personal behavior,’ and is better understood as being necessarily rooted in computer technology in order to solve a specific problem in the realm of cyber security” (emphasis added).
However, neither the alleged complexity of the claim, nor the alleged “specific problem” that the claim is assumed to be solving in the realm of cybersecurity, shields the claim from the abstract idea that it recites. It is worth noting that prong-one of Step 2A does not consider any of the computer elements that the claim is reciting. Instead, regardless of the computer elements being claimed and/or the alleged complexity of the claim, prong-one merely requires one to identify the judicial exception (e.g., the abstract idea) that the claim is reciting; see MPEP 2106.07(a) (emphasis added),
For Step 2A Prong One, the rejection should identify the judicial exception by referring to what is recited (i.e., set forth or described) in the claim and explain why it is considered an exception. For example, if the claim is directed to an abstract idea, the rejection should identify the abstract idea as it is recited (i.e., set forth or described) in the claim and explain why it is an abstract idea.
Thus, even assuming arguendo that claim 24 (or any of the current claims) is a complex claim that purportedly solves a specific problem (i.e., providing a user with training regarding cybersecurity, so that the user’s vulnerability to one or more cyberthreats is minimized), the above does not necessarily imply that the claim is not reciting any abstract idea. This is because one has to evaluate the limitations that the claim is reciting in order to determine whether or not the claim is reciting an abstract idea. For instance, the claim recites limitations that specify the process of collecting information regarding the user—namely, the user’s historical data and one or more actions that the user has performed; and thereby, the user’s susceptibility to one or more threats (e.g., a first threat, a second threat) is determined. The above is indeed an abstract idea; for example, an evaluation, an observation and/or a judgment process that can be performed in the human mind and/or using a pen and paper (e.g., per the abstract idea group mental processes). This is because a human—such as a supervisor—can not only evaluate the historical data of the user (e.g., a record in the user’s file/catalog), but also observe one or more actions that the user is performing; and thereby, the supervisor makes a judgment regarding the vulnerability of the user to one or more threats.
The above confirms that the claims indeed recite an abstract idea. Of course, the above is just an exemplary scenario since a similar type of analysis applies to the rest of the limitations.
In addition, while referring to one of the memorandums (e.g., the December 2025 Memo), Applicant is asserting that “Claim 1 is directed to a particular method of configuring and applying a quantitative training needs model, which may be used to dynamically optimize selection of mock cyber attacks and training interventions. For example, the model relies on training data corresponding to susceptibility estimates, threat scenarios, and various periods of time (among other information). The model may continue to incorporate user feedback into the selection of future mock attacks/training interventions . . . the claimed features provide a technical solution to a technical problem associated with cybersecurity risks/vulnerabilities associated with particular individuals (which e.g., provide vulnerabilities/points of entry for cyber attacks within a corresponding system). For example, these features provide: 1) an automated assessment of cybersecurity risk, 2) system-generated selections of security interventions based on these assessments, and 3) integration of these operations into a computerized security platform” (emphasis added).
However, neither Applicant’s summary regarding the recent memorandum, nor Applicant’s assertions regarding the claimed “quantitative training needs model”, is sufficient to demonstrate that any of the current claims amounts to “significantly more” than an abstract idea. It is worth noting that the recent memorandum (the December 2025 Memo) does not change any of the eligibility analysis. This means the current eligibility analysis (e.g., the Alice/Mayo framework) still remains intact. In addition, contrary to Applicant’s assertion, the claimed (and disclosed) “quantitative training needs model” is not necessarily a machine-learning model; rather, it is a rule that associates a set of parameters (e.g., a threat scenario, a user action, the frequency of user action) with one or more corresponding outcomes (e.g., degree of susceptibility, cost, type of training needed), etc. (e.g., see FIG. 5, FIG. 12). Thus, an authorized individual (e.g., a supervisor or an administrator) drafts such a table/rule that correlates different parameters with one or more corresponding outcomes, so that the system utilizes the table to determine, based on monitoring the actions that the user is performing, one or more types of threats that the user is facing (e.g., the vulnerability of the user to one or more threats); and thereby, it identifies—from the table—the relevant training that is already assigned to the type of threat determined. Thus, the claimed/disclosed “quantitative training needs model” is merely a tool (e.g., a table) that is being utilized to identify the most relevant training material for the user. Consequently, even assuming arguendo that the claimed (and/or the disclosed) “quantitative training needs model” is a machine-learning model, this is still not sufficient to demonstrate that any of the current claims amounts to “significantly more” than an abstract idea. This is because the use of one or more algorithms, including one or more machine-learning models, to (i) analyze collected data related to a user (e.g., the user’s schedule, the user’s skill level, the user’s activity, etc.), and (ii) select and/or present one or more pertinent content items to the user (e.g., pertinent training materials; pertinent entertainment materials), etc., is already part of the existing computer/network technology. Thus, given the lack of technological improvement per the claimed (and the disclosed) system/method, none of the claims—considered as a whole—integrates the abstract idea into a patent-eligible practical application (per prong-two of Step 2A). So far, except for describing the alleged configuration and/or purpose of the claimed (or the disclosed) “quantitative training needs model”, Applicant has not demonstrated that the model alone—or in combination with other claimed/disclosed features—provides any technological improvement over the existing computer/network technology. Consequently, Applicant’s arguments are not persuasive.
Secondly, while attempting to challenge the Office’s analysis presented in the previous office-action (e.g. part of the analysis on pages 11 and 12 of the office-action dated 10/02/2025), Applicant is asserting that the above “improperly shifts the analysis away from the claimed system, which provides an automated system for assessing and addressing particular cybersecurity vulnerabilities for an enterprise system, automatically selecting training interventions based on this assessment, and integrating these interventions into the system accordingly, and instead shifts the analysis toward an alleged improvement in human capability. This mischaracterization abstracts away the claimed technical mechanisms, and is inconsistent with both the December 2025 Memo and Enfish” (emphasis added).
Although it is unclear why Applicant has replaced the actual term, “technological”, with the term “technical”, when attempting to quote one of the lines from the previous office-action (i.e., page 11, last paragraph), Applicant’s assertion above still does not appear to be valid. In particular, besides being inconsistent with the facts presented in the previous office-action, Applicant fails to provide any rationale and/or evidence to substantiate the alleged “shift” from the claimed system. Note that similar to the current argument, Applicant previously argued that “claim 24, as amended, is more complex than merely reciting the performance of 'managing personal behavior,' and is better understood as being necessarily rooted in computer technology in order to solve a specific problem in the realm of cyber security . . . the claims are directed to a particular method of training, applying, and refining a quantitative training needs model, which may be used to dynamically optimize selection of mock cyber attacks and training interventions” (see page 11 of Applicant’s response filed on 06/13/2025, emphasis added).
Of course, in response to the above argument, the Office presented the following analysis in the previous office-action (again see pages 11 and 12 of the office-action dated 10/02/2025, emphasis added),
“However, the alleged ‘specific problem in the realm of cyber security’ has nothing to do with solving a technological problem (if any) that the existing computer/network technology is facing. Instead, it is a scheme intended to train a user regarding cyber threats, so that the user would be vigilant regarding such threats when interacting with one or more online entities . . . the model above also has nothing to do with providing a technological improvement over the existing computer/network technology. Instead, it is a scheme intended to facilitate the selection of the appropriate training material and the suitable training delivery time; so that the user would be presented with the appropriate training material at the appropriate time”
Thus, it is quite evident—at least to one familiar with the eligibility analysis—that the above response does not “shift” away from the claimed system. On the contrary, it is specifically directed to the claimed system. In particular, it points out why the so-called “quantitative training needs model” that the claimed system is reciting, and/or the alleged “specific problem in the realm of cyber security” that the claimed system is assumed to be solving, fail to demonstrate any technological improvement over the relevant existing technology. Consequently, Applicant’s argument above is not persuasive. If anything, the argument appears to be an excuse to repeatedly mention the recent memorandum (the Memorandum of December 2025); nevertheless, it is again important to note that the recent memorandum does not change the eligibility analysis (i.e., the Alice/Mayo framework is still intact).
In addition, while referring to the current amendment, Applicant asserts, “at least by virtue of the inclusion of newly added features such as ‘automatically initiating an order for fake malware-containing memory devices to be delivered to the second user,’ the claim cannot reasonably be directed to either methods of organizing human activity or a mental process. Rather, at least this feature recites the automated generation and deployment of a trigger signal to initiate (by coordinating with a supplier system) an order for memory devices and automatically arranging their delivery . . . amended claim 24 recites features that, when considered as a whole, integrate any such exception into a practical application and thus render the claim eligible” (emphasis added).
However, here also the process of automatically initiating an order for a product to be delivered, regardless of whether the product is a memory device that contains fake malware or a useful file, has nothing to do with a technological improvement over the existing computer/network technology. This is again because such a process of automatically initiating an order for a given product is already part of the existing computer/network technology (e.g., the Internet technology). In fact, entities that sell products to customers commonly implement such existing technology; so that, based on the analysis of collected data (e.g., inventory data regarding the product, etc.), an electronic order regarding the product is automatically generated/transmitted to the relevant supplier/vendor, etc. (e.g., US 2003/0033205, [0077] lines 1-6; see also US 2003/0105722, [0057] lines 1-9; etc.). Thus, here also Applicant appears to be attempting to paint the features of the existing computer/network technology as advanced technological features.
It is also noted that Applicant is incorrectly blending the inquiry of prong-one with that of prong-two. Nevertheless, even basic common sense dictates that a technological improvement (if any) is demonstrated by comparing two technologies (e.g., comparing the claimed/disclosed technology with the existing technology). In contrast, Applicant appears to be attempting to demonstrate the alleged technological improvement by comparing the operation of the claimed computer with the abstract idea groups (e.g., certain methods of organizing human activity, and/or a mental process). However, such a comparison is not valid for demonstrating a technological improvement (if any) that the claimed (and/or the disclosed) system/method is allegedly providing. Moreover, when evaluating whether a given claim is reciting an abstract idea (e.g., a mental process, and/or certain methods of organizing human activity), the test does not require one to consider any of the computer elements, including the operation that Applicant alleges, i.e., “automated generation and deployment of a trigger signal to initiate (by coordinating with a supplier system) an order” (emphasis added). This is again because none of the claimed (or disclosed) computer elements is part of the abstract idea; rather, the computer elements are part of the additional elements. Thus, Applicant’s attempt to challenge the Office’s findings, while repeatedly blending the computer elements with the abstract idea, is not persuasive.
Applicant further asserts, “claim 24 recites features that, when considered as a whole, integrate any such exception into a practical application . . . claim 24 recites a specific series of computer-executed instructions . . . receiving historical data . . . receiving sensor data . . . using the historical data and the sensor data to determine a first susceptibility of the user . . . determining, based on the first susceptibility of the user to
the first cybersecurity threat . . . determining an amount of time available . . . based on the determination that the user is more susceptible to the first cybersecurity threat and the amount of time available to train the user, selecting a training intervention . . . applying statistical analysis and data mining to historical data . . . storing correlations between the susceptibility estimates . . . causing the selected training intervention to be presented to the user . . . detecting a first sensed action . . . causing the selected training intervention to be presented to a second user . . . causing a device of the second user to initiate a download of mock malware . . . automatically initiating an order for fake malware-containing memory devices . . . receiving feedback information . . . wherein selection of future training interventions is based on the feedback . . . These features would have improved the relevant technology that existed at the time of the effective filing date of the instant application, which leads to the conclusion that the claim is eligible” (emphasis added).
However, except for listing the limitations that claim 24 is reciting, Applicant fails to identify an element (if any)—or a combination of elements (if any)—that provides a technological improvement over the relevant existing technology. In particular, given the claimed (and the disclosed) technology, an integration (if any) of the abstract idea into a patent-eligible practical application is demonstrated when the claimed system/method is implementing an element—or a combination of elements—that provides a technological improvement over the existing computer/network technology. In contrast, the claimed (and the disclosed) system/method is utilizing the existing computer/network technology (e.g., the existing Internet technology)—merely as a tool—to facilitate an abstract idea; such as, presenting a user(s) with pertinent training materials based on the analysis of data collected regarding the user(s), etc. (e.g., see above the abstract idea identified under prong-one of Step 2A). Consequently, none of the conclusory assertions that Applicant is making, including “an automated system for assessing and addressing particular cybersecurity vulnerabilities for an enterprise system, automatically selecting training interventions based on this assessment, and integrating these interventions into the system accordingly, which thus reduces susceptibility of the corresponding enterprise system to cyber threats/vulnerabilities”, “imposing a meaningful limit on the scope of amended independent claim 24, such that amended independent claim 24 amounts to more than a drafting effort to monopolize any such ‘certain method of organizing human activity’ or ‘mental process.’”, etc., is persuasive.
Thirdly, regarding Step 2B, Applicant is asserting that “[t]he rejection of the claims does not support the conclusion that the actual language of the claims, or the claims as a whole, recite a ‘combination of elements [that represent] well-understood, routine, conventional activity [ ... ] widely prevalent or in common use in the relevant industry.’ MPEP 2106.05 . . . This is further emphasized in BASCOM Global Internet Servs. v. AT&T Mobility LLC . . . Even if the claimed elements are considered to be generic computer network components, similar to the combination in Bascom, these elements amount to significantly more because of their non-conventional and non-generic arrangement, which provides a technical improvement . . . The meaningful limitations of claim 24 satisfy the considerations of MPEP 2106.05(e) by using a number of elements to create a unique combination of features that is integrated into a practical application, namely providing an automated system for assessing and addressing particular cybersecurity vulnerabilities for an enterprise system, automatically selecting training interventions based on this assessment, and integrating these interventions into the system accordingly, which thus reduces susceptibility of the corresponding enterprise system to cyber threats/vulnerabilities” (emphasis added).
However, Applicant appears to misconstrue the inquiry established under Step 2B. Note that when evaluating whether the current claimed system/method is directed to a well-understood, routine, conventional activity (hereinafter WRCA), the test does not consider the new abstract idea that the claim is reciting. Instead, while considering the claim as a whole, it evaluates whether the claimed system/method is beyond the conventional and generic arrangement of the additional elements. Thus, Applicant’s alleged “actual language of the claims” is not even relevant, much less sufficient to challenge the Office’s finding under Step 2B. This is again because the WRCA test does not rely on the new abstract idea, which is part of the actual claim language. In the instant case, given the fact that the claimed (and the disclosed) system/method is directed to the conventional and generic arrangement of the additional elements, each of the current claims—when considered as a whole—is indeed directed to WRCA (e.g., the conventional Internet technology, wherein two or more electronic devices, including sensors, communicate with one another over the conventional communication network, etc., thereby facilitating the process of evaluating data related to the user, including the process of presenting pertinent content to the user, etc.; e.g., see the arrangement per current claim 24).
In addition, similar to the point made in the previous office-action, the WRCA test further relies on the technology disclosed in the specification (MPEP 2106.05(d)(I)(2), emphasis added),
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry . . . Intellectual Ventures v. Symantec, 838 F.3d 1307, 1317; 120 USPQ2d 1353, 1359 (Fed. Cir. 2016) (“The written description is particularly useful in determining what is well-known or conventional”) . . . TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as “either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.”). As such, an examiner should determine that an element (or combination of elements) is well-understood, routine, conventional activity only when the examiner can readily conclude, based on their expertise in the art, that the element is widely prevalent or in common use in the relevant industry.
Thus, it is quite evident—at least per the excerpt above—that the WRCA test is focusing on the technology, as opposed to the new abstract idea, that the claims are reciting.
Note also that despite alleging a “technical improvement” and/or “a unique combination of features that is integrated into a practical application”, Applicant fails to substantiate whether the alleged “technical improvement”, and/or the alleged “unique combination of features”, is directed to a technological improvement over the existing computer/network technology. Instead, Applicant appears to be listing the alleged objectives of the claimed system and/or the steps being performed (e.g., providing an automated system for assessing and addressing particular cybersecurity vulnerabilities for an enterprise system; automatically selecting training interventions based on this assessment, etc.). Accordingly, none of Applicant’s assertions, alone or in combination, demonstrates any technological improvement over the relevant existing technology. Consequently, Applicant’s arguments are not persuasive.
Thus, at least for the reasons above, the Office concludes that each of the current claims—when considered as a whole—fails to implement an inventive concept that amounts to “significantly more” than an abstract idea.
Claim Rejections - 35 USC § 112
4. The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
● Claim 48 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
It is worth noting that the finding presented in the previous office-action, under § 112(a), applied to all of the claims, including claim 48, since claim 48 depends from claim 24.
The current amendment to claims 24 and 36 already removes, as applied to the majority of the claims (i.e., claims 24-30, 32-36, 38, 39, 44, 46, 47 and 49), the previous limitation that introduced the new subject matter; namely the limitation directed to training “the quantitative training needs model” (e.g., see lines 25-26 of claim 24).
However, current claim 48 still involves the same new subject matter identified above, which is directed to training “the quantitative training needs model” (see lines 1 and 2 of claim 48).
However, as already pointed out in the previous office-action, the original disclosure does not appear to provide a written description of such machine learning that trains the “quantitative training needs model”, as current claim 48 attempts to portray. In particular, similar to the point made in the previous office-action, the “quantitative training needs model”—per the original disclosure—is directed to a predetermined rule set, which associates a set of parameters (e.g., a threat scenario, a user action, the frequency of user action) with one or more corresponding outcomes (e.g., a degree of susceptibility, cost, type of training needed), etc. (e.g., see FIG. 5, FIG. 12). Thus, the claimed (and disclosed) “quantitative training needs model” does not correspond to a machine-learning model that the system trains.
Note that, when an amendment is filed in reply to an objection or rejection based on 35 U.S.C. 112(a), or first paragraph (pre-AIA), a study of the entire application is often necessary to determine whether or not "new matter" is involved. Applicant should therefore specifically point out the support for any amendments made to the disclosure (see MPEP 2163.06).
5. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
● Claim 48 is rejected under 35 U.S.C. 112(b), or second paragraph (pre-AIA), as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention.
Claim 48 is dependent on claim 24; and claim 48 recites, “wherein training the quantitative training needs model further comprises training the quantitative training needs model . . . a recipient of the selected training intervention” (emphasis added).
However, the training recited above lacks sufficient antecedent basis since current claim 24 does not involve any training of the “quantitative training needs model”; therefore, claim 48 is indefinite at least for the reason above.
Prior Art
● Considering each claim as a whole (e.g., see each of the independent claims), the prior art does not teach or suggest the claims as currently presented (regarding the state of the prior art, see the office-action dated 03/13/2024).
Conclusion
6. Applicant’s amendment necessitated the new grounds of rejection presented in this final office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRUK A GEBREMICHAEL whose telephone number is (571) 270-3079. The examiner can normally be reached from 7:00AM to 3:00PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DAVID LEWIS can be reached on (571) 272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRUK A GEBREMICHAEL/Primary Examiner, Art Unit 3715