Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/19/2025 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4-7, 11, 14-17, 21, and 24-27 are rejected under 35 U.S.C. 103 as being unpatentable over Scheidler et al. (US 10,681,060) in view of Soni et al. (US 2024/0171451, cited in Applicant's IDS), Humphrey et al. (US 2019/0260764), and Kirti et al. (US 2017/0251013).
For claim 1, Scheidler teaches a computer-implemented method executed on a computing device (abstract, lines 1-3) comprising: monitoring activity within a computing platform (col.11, lines 55-60) (the BSP platform, which includes the activity of the users that is monitored and analyzed based on their activities, as shown in fig.4), including monitoring activity with respect to the computing platform (Scheidler teaches a security team including Security Operations Center (SOC) operators (L1 personnel handling tickets and alerts), a security investigator with stronger security skills (L2), a security architect responsible for the organization's security architecture, a security operations engineer responsible for running security applications, a Chief Information Security Officer (CISO), and other stakeholders, including system administrators and operators (not daily users, but responsible for applications and surrounding infrastructure) and an external auditor, as Scheidler teaches in col.10, lines 25-40 and col.40, lines 10-20), thus defining monitored activity (col.10, lines 45-60); associating the monitored activity with a user of the computing platform, thus defining an associated user (col.58, lines 65-68 to col.59, lines 1-5); and assigning a risk level to the monitored activity (each activity is assigned a risk level) to determine if such monitored activity is indicative of a security event (analyzing each activity and determining the risk level of each activity) (col.2, lines 13-20 and 26-40); determining that the monitored activity is indicative of a security event; generating an initial notification (such as alerts displayed on the user interface) of the security event, wherein the initial notification includes a computer-readable language portion (the user interface displays text for the user to read the messages) that defines one or more specifics of the security event (Scheidler, col.5, lines 49-50 and col.19, lines 50-60); and iteratively (defined as continuously in par.0058 of the present application's specification) processing the initial notification using a generative AI model (machine learning) and a formatting script to produce a summarized human-readable report for the initial notification (Scheidler, col.8, lines 9-20 and col.19, lines 50-65).
Scheidler fails to teach monitoring activity by one or more security-relevant subsystems via a plurality of agents deployed across the computing platform, wherein the plurality of agents deployed across the computing platform include software components configured to one or more of: monitor network traffic; detect anomalies; log network activity; and generate alerts; wherein at least a portion of the plurality of agents are trained on a data repository of activities within the computing platform and associated outcomes; wherein the assigned risk level is based, at least in part, upon an identity of the associated user; generating one or more of recommended next steps for additional investigation of the security event, recommended actions responsive to the security event, and disclaimers regarding the security event; and automatically performing one or more investigative operations concerning the associated user with respect to the security event.
Soni, in a similar art, teaches monitoring activity by one or more security-relevant subsystems via a plurality of agents deployed across the computing platform (Soni teaches that analytics applications today already collect many types of telemetry data from the devices in the fabric; when an analytics application 312 is deployed at customers 310 (e.g., "Customer 1", "Customer 2", etc.) in such fabrics, these analytics applications can export these elements of high-level data (parameters), i.e., telemetry data 320 corresponding to the above criteria, to a cloud instance 330 (e.g., analytics application 322 of the cloud instance), as Soni teaches in par.37 and 62); wherein the plurality of agents deployed across the computing platform include software components configured to one or more of: monitor network traffic; detect anomalies; log network activity; and generate alerts (Soni teaches that such data 320 also includes the event/alert high-level information from the analytics application 312 that the system experiences during such triggers; for example, such an alert may contain information such as "severity" (e.g., major, minor, critical, warning, etc.), "category" (e.g., interface, node, etc.), "title" (e.g., "interface down", "delay", etc.), "count" (e.g., number of occurrences of the alert), "description" (e.g., "interface is down", "max delay exceeded", "packet drops sharply rising", etc.), and so on, as Soni teaches in par.62); and wherein at least a portion of the plurality of agents are trained on a data repository of activities within the computing platform and associated outcomes (Soni teaches an analytics application that is already monitoring devices; this analytics application already has an alert-raising system in place which raises alerts for interfaces, devices going down, flows getting dropped, endpoints moving, etc. Accordingly, when everything is "up and running" in a cluster, the system is clean and likely there are not many alerts; the number of alerts raised during that time is sent to the microservice running in the cloud, which can be termed a training data set for the ML algorithm, as Soni teaches in par.79-80). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Scheidler to include a plurality of agents deployed across the computing platform that include software components, as taught and suggested by Soni, for the purpose of causing the management device to distinguish between the set of expected reactions and any unexpected events during the disruptive activity (Soni, abstract). Scheidler, as modified by Soni, does not explicitly teach wherein the assigned risk level is based, at least in part, upon an identity of the associated user; generating one or more of recommended next steps for additional investigation of the security event, recommended actions responsive to the security event, and disclaimers regarding the security event; and automatically performing one or more investigative operations concerning the associated user with respect to the security event.
Humphrey, in a similar system, teaches generating one or more of recommended next steps for additional investigation of the security event, recommended actions responsive to the security event, and disclaimers regarding the security event (examiner notes that Humphrey teaches possibly recommending actions or taking actions autonomously in response to this threat, which is a security event; this covers the limitation of recommended actions responsive to the security event, as Humphrey teaches in par.86) and automatically performing one or more investigative operations concerning the associated user with respect to the security event (examiner notes that Humphrey teaches performing analytic investigation of events automatically through an intelligent machine, as Humphrey teaches in par.86, lines 1-6 and par.89, lines 1-10). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Scheidler, as modified by Soni, to include automatically performing one or more investigative operations, as taught and suggested by Humphrey, for the purpose of improving investigation efficiency, guiding human users, increasing the efficiency of a human cyber security analyst, and providing key investigation information to human operators (Humphrey, par.86). Scheidler, as modified by Soni and Humphrey, does not explicitly teach wherein the assigned risk level is based, at least in part, upon an identity of the associated user.
Kirti, in a similar system, teaches wherein the assigned risk level is based, at least in part, upon an identity of the associated user (Kirti teaches that activity data can include the user account or other user identifier for the user associated with the events or statistics, as Kirti teaches in par.102 and par.115-116). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Scheidler, as modified by Soni and Humphrey, to base the assigned risk level upon an identity of the associated user, as taught and suggested by Kirti, for the purpose of determining a threat of security posed by the application based on use of the application and managing access to applications to minimize security threats and risks in a computing environment of the organization (Kirti, abstract).
For claim 4, Scheidler in view of Soni, Humphrey and Kirti discloses the method of claim 1. Scheidler further teaches wherein the computing platform includes a plurality of security-relevant subsystems (systems such as database servers, web servers, initial data import, and automatic job setup) (Scheidler, col.6, lines 45-68; col.11, lines 55-68 to col.12, lines 1-15).
For claim 5, Scheidler in view of Soni, Humphrey and Kirti discloses the method of claim 1. Scheidler further teaches wherein monitoring activity within a computing platform includes: monitoring activity within one or more of the plurality of security-relevant subsystems of the computing platform (the system monitors the user activities in the database servers for analytics) (Scheidler, col.6, lines 45-68 and col.12, lines 10-30).
For claim 6, Scheidler in view of Soni, Humphrey and Kirti discloses the method of claim 1. Scheidler further teaches wherein iteratively processing the initial notification using a generative AI model (machine learning) and a formatting script to produce a summarized human-readable report for the initial notification (col.8, lines 9-20 and col.19, lines 50-65) includes: iteratively processing the initial notification using the generative AI model (machine learning), the formatting script, and/or one or more tools (using the user interface to show and visualize message alerts) to produce the summarized human-readable report for the initial notification (Scheidler, col.3, lines 15-25; col.8, lines 9-20; and col.19, lines 50-65).
For claim 7, Scheidler in view of Soni, Humphrey and Kirti discloses the method of claim 1. Scheidler further teaches wherein the one or more tools includes one or more of: a decoding tool to decode an encoded initial notification; a decompression tool to decompress a compressed initial notification; and an identification tool to identify an owner of a domain associated with the initial notification (the Blindspotter security system uses a tool to identify) (Scheidler, col.5, lines 10-15 and col.20, lines 34-50).
For claim 11, Scheidler teaches a computer program product residing on a non-transitory computer readable medium having a plurality of instructions stored thereon which, when executed by a processor (col.6, lines 3-10), cause the processor to perform operations (abstract, lines 1-3) comprising: monitoring activity within a computing platform (col.11, lines 55-60) (the BSP platform, which includes the activity of the users, as shown in fig.4), including monitoring activity with respect to the computing platform (Scheidler teaches a security team including Security Operations Center (SOC) operators (L1 personnel handling tickets and alerts), a security investigator with stronger security skills (L2), a security architect responsible for the organization's security architecture, a security operations engineer responsible for running security applications, a Chief Information Security Officer (CISO), and other stakeholders, including system administrators and operators (not daily users, but responsible for applications and surrounding infrastructure) and an external auditor, as Scheidler teaches in col.10, lines 25-40 and col.40, lines 10-20), thus defining monitored activity (col.10, lines 45-60); associating the monitored activity with a user of the computing platform, thus defining an associated user (col.58, lines 65-68 to col.59, lines 1-5); and assigning a risk level to the monitored activity to determine if such monitored activity is indicative of a security event (analyzing the activity level) (col.2, lines 13-20 and 26-40); determining that the monitored activity is indicative of a security event; generating an initial notification (such as alerts displayed on the user interface) of the security event, wherein the initial notification includes a computer-readable language portion (the user interface displays text for the user to read the messages) that defines one or more specifics of the security event (Scheidler, col.5, lines 49-50 and col.19, lines 50-60); and iteratively (defined as continuously in par.0058 of the present application's specification) processing the initial notification using a generative AI model (machine learning) and a formatting script to produce a summarized human-readable report for the initial notification (Scheidler, col.8, lines 9-20 and col.19, lines 50-65).
Scheidler fails to teach monitoring activity by one or more security-relevant subsystems via a plurality of agents deployed across the computing platform, wherein the plurality of agents deployed across the computing platform include software components configured to one or more of: monitor network traffic; detect anomalies; log network activity; and generate alerts; wherein at least a portion of the plurality of agents are trained on a data repository of activities within the computing platform and associated outcomes; wherein the assigned risk level is based, at least in part, upon an identity of the associated user; generating one or more of recommended next steps for additional investigation of the security event, recommended actions responsive to the security event, and disclaimers regarding the security event; and automatically performing one or more investigative operations concerning the associated user with respect to the security event.
Soni, in a similar art, teaches monitoring activity by one or more security-relevant subsystems via a plurality of agents deployed across the computing platform (Soni teaches that analytics applications today already collect many types of telemetry data from the devices in the fabric; when an analytics application 312 is deployed at customers 310 (e.g., "Customer 1", "Customer 2", etc.) in such fabrics, these analytics applications can export these elements of high-level data (parameters), i.e., telemetry data 320 corresponding to the above criteria, to a cloud instance 330 (e.g., analytics application 322 of the cloud instance), as Soni teaches in par.37 and 62); wherein the plurality of agents deployed across the computing platform include software components configured to one or more of: monitor network traffic; detect anomalies; log network activity; and generate alerts (Soni teaches that such data 320 also includes the event/alert high-level information from the analytics application 312 that the system experiences during such triggers; for example, such an alert may contain information such as "severity" (e.g., major, minor, critical, warning, etc.), "category" (e.g., interface, node, etc.), "title" (e.g., "interface down", "delay", etc.), "count" (e.g., number of occurrences of the alert), "description" (e.g., "interface is down", "max delay exceeded", "packet drops sharply rising", etc.), and so on, as Soni teaches in par.62); and wherein at least a portion of the plurality of agents are trained on a data repository of activities within the computing platform and associated outcomes (Soni teaches an analytics application that is already monitoring devices; this analytics application already has an alert-raising system in place which raises alerts for interfaces, devices going down, flows getting dropped, endpoints moving, etc. Accordingly, when everything is "up and running" in a cluster, the system is clean and likely there are not many alerts; the number of alerts raised during that time is sent to the microservice running in the cloud, which can be termed a training data set for the ML algorithm, as Soni teaches in par.79-80). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Scheidler to include a plurality of agents deployed across the computing platform that include software components, as taught and suggested by Soni, for the purpose of causing the management device to distinguish between the set of expected reactions and any unexpected events during the disruptive activity (Soni, abstract). Scheidler, as modified by Soni, does not explicitly teach wherein the assigned risk level is based, at least in part, upon an identity of the associated user; generating one or more of recommended next steps for additional investigation of the security event, recommended actions responsive to the security event, and disclaimers regarding the security event; and automatically performing one or more investigative operations concerning the associated user with respect to the security event.
Humphrey, in a similar system, teaches generating one or more of recommended next steps for additional investigation of the security event, recommended actions responsive to the security event, and disclaimers regarding the security event (examiner notes that Humphrey teaches possibly recommending actions or taking actions autonomously in response to this threat, which is a security event; this covers the limitation of recommended actions responsive to the security event, as Humphrey teaches in par.86) and automatically performing one or more investigative operations concerning the associated user with respect to the security event (examiner notes that Humphrey teaches performing analytic investigation of events automatically through an intelligent machine, as Humphrey teaches in par.86, lines 1-6 and par.89, lines 1-10). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Scheidler, as modified by Soni, to include automatically performing one or more investigative operations, as taught and suggested by Humphrey, for the purpose of improving investigation efficiency, guiding human users, increasing the efficiency of a human cyber security analyst, and providing key investigation information to human operators (Humphrey, par.86). Scheidler, as modified by Soni and Humphrey, does not explicitly teach wherein the assigned risk level is based, at least in part, upon an identity of the associated user.
Kirti, in a similar system, teaches wherein the assigned risk level is based, at least in part, upon an identity of the associated user (Kirti teaches that activity data can include the user account or other user identifier for the user associated with the events or statistics, as Kirti teaches in par.102 and par.115-116). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Scheidler, as modified by Soni and Humphrey, to base the assigned risk level upon an identity of the associated user, as taught and suggested by Kirti, for the purpose of determining a threat of security posed by the application based on use of the application and managing access to applications to minimize security threats and risks in a computing environment of the organization (Kirti, abstract).
For claim 12, Scheidler in view of Soni, Humphrey and Kirti discloses the computer program product of claim 11. Scheidler further teaches: if such monitored activity is indicative of a security event, generating an initial notification (such as alerts displayed on the user interface) of the security event, wherein the initial notification includes a computer-readable language portion (the user interface displays text for the user to read) that defines one or more specifics of the security event (Scheidler, col.5, lines 49-50 and col.19, lines 50-60).
For claim 13, Scheidler in view of Soni, Humphrey and Kirti discloses the computer program product of claim 11. Scheidler further teaches iteratively (defined as continuously in par.0058 of the present application's specification) processing the initial notification using a generative AI model (machine learning) and a formatting script to produce a summarized human-readable report for the initial notification (Scheidler, col.8, lines 9-20 and col.19, lines 50-65).
For claim 14, Scheidler in view of Soni, Humphrey and Kirti discloses the computer program product of claim 11. Scheidler further teaches wherein the computing platform includes a plurality of security-relevant subsystems (systems such as database servers, web servers, initial data import, and automatic job setup) (Scheidler, col.6, lines 45-68; col.11, lines 55-68 to col.12, lines 1-15).
For claim 15, Scheidler in view of Soni, Humphrey and Kirti discloses the computer program product of claim 11. Scheidler further teaches wherein monitoring activity within a computing platform includes: monitoring activity within one or more of the plurality of security-relevant subsystems of the computing platform (the system monitors the user activities in the database servers for analytics) (Scheidler, col.6, lines 45-68 and col.12, lines 10-30).
For claim 16, Scheidler in view of Soni, Humphrey and Kirti discloses the computer program product of claim 11. Scheidler further teaches wherein iteratively processing the initial notification using a generative AI model (machine learning) and a formatting script to produce a summarized human-readable report for the initial notification (col.8, lines 9-20 and col.19, lines 50-65) includes: iteratively processing the initial notification using the generative AI model (machine learning), the formatting script, and/or one or more tools (using the user interface to show and visualize message alerts) to produce the summarized human-readable report for the initial notification (Scheidler, col.3, lines 15-25; col.8, lines 9-20; and col.19, lines 50-65).
For claim 17, Scheidler in view of Soni, Humphrey and Kirti discloses the computer program product of claim 11. Scheidler further teaches wherein the one or more tools includes one or more of: a decoding tool to decode an encoded initial notification; a decompression tool to decompress a compressed initial notification; and an identification tool to identify an owner of a domain associated with the initial notification (the Blindspotter security system uses a tool to identify) (Scheidler, col.5, lines 10-15 and col.20, lines 34-50).
For claim 21, Scheidler teaches a computing system including a processor (col.6, lines 4-6) and memory configured to perform operations (abstract, lines 1-3) comprising: monitoring activity within a computing platform (col.11, lines 55-60) (the BSP platform, which includes the activity of the users, as shown in fig.4), including monitoring activity with respect to the computing platform (Scheidler teaches a security team including Security Operations Center (SOC) operators (L1 personnel handling tickets and alerts), a security investigator with stronger security skills (L2), a security architect responsible for the organization's security architecture, a security operations engineer responsible for running security applications, a Chief Information Security Officer (CISO), and other stakeholders, including system administrators and operators (not daily users, but responsible for applications and surrounding infrastructure) and an external auditor, as Scheidler teaches in col.10, lines 25-40 and col.40, lines 10-20), thus defining monitored activity (col.10, lines 45-60); associating the monitored activity with a user of the computing platform, thus defining an associated user (col.58, lines 65-68 to col.59, lines 1-5); and assigning a risk level to the monitored activity to determine if such monitored activity is indicative of a security event (analyzing the activity level), wherein the assigned risk level is based, at least in part, upon the associated user (col.2, lines 13-20 and 26-40); determining that the monitored activity is indicative of a security event; generating an initial notification (such as alerts displayed on the user interface) of the security event, wherein the initial notification includes a computer-readable language portion (the user interface displays text for the user to read the messages) that defines one or more specifics of the security event (Scheidler, col.5, lines 49-50 and col.19, lines 50-60); and iteratively (defined as continuously in par.0058 of the present application's specification) processing the initial notification using a generative AI model (machine learning) and a formatting script to produce a summarized human-readable report for the initial notification (Scheidler, col.8, lines 9-20 and col.19, lines 50-65).
Scheidler fails to teach monitoring activity by one or more security-relevant subsystems via a plurality of agents deployed across the computing platform, wherein the plurality of agents deployed across the computing platform include software components configured to one or more of: monitor network traffic; detect anomalies; log network activity; and generate alerts; wherein at least a portion of the plurality of agents are trained on a data repository of activities within the computing platform and associated outcomes; wherein the assigned risk level is based, at least in part, upon an identity of the associated user; generating one or more of recommended next steps for additional investigation of the security event, recommended actions responsive to the security event, and disclaimers regarding the security event; and automatically performing one or more investigative operations concerning the associated user with respect to the security event.
Soni, in a similar art, teaches monitoring activity by one or more security-relevant subsystems via a plurality of agents deployed across the computing platform (Soni teaches that analytics applications today already collect many types of telemetry data from the devices in the fabric; when an analytics application 312 is deployed at customers 310 (e.g., "Customer 1", "Customer 2", etc.) in such fabrics, these analytics applications can export these elements of high-level data (parameters), i.e., telemetry data 320 corresponding to the above criteria, to a cloud instance 330 (e.g., analytics application 322 of the cloud instance), as Soni teaches in par.37 and 62); wherein the plurality of agents deployed across the computing platform include software components configured to one or more of: monitor network traffic; detect anomalies; log network activity; and generate alerts (Soni teaches that such data 320 also includes the event/alert high-level information from the analytics application 312 that the system experiences during such triggers; for example, such an alert may contain information such as "severity" (e.g., major, minor, critical, warning, etc.), "category" (e.g., interface, node, etc.), "title" (e.g., "interface down", "delay", etc.), "count" (e.g., number of occurrences of the alert), "description" (e.g., "interface is down", "max delay exceeded", "packet drops sharply rising", etc.), and so on, as Soni teaches in par.62); and wherein at least a portion of the plurality of agents are trained on a data repository of activities within the computing platform and associated outcomes (Soni teaches an analytics application that is already monitoring devices; this analytics application already has an alert-raising system in place which raises alerts for interfaces, devices going down, flows getting dropped, endpoints moving, etc. Accordingly, when everything is "up and running" in a cluster, the system is clean and likely there are not many alerts; the number of alerts raised during that time is sent to the microservice running in the cloud, which can be termed a training data set for the ML algorithm, as Soni teaches in par.79-80). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Scheidler to include a plurality of agents deployed across the computing platform that include software components, as taught and suggested by Soni, for the purpose of causing the management device to distinguish between the set of expected reactions and any unexpected events during the disruptive activity (Soni, abstract). Scheidler, as modified by Soni, does not explicitly teach wherein the assigned risk level is based, at least in part, upon an identity of the associated user; generating one or more of recommended next steps for additional investigation of the security event, recommended actions responsive to the security event, and disclaimers regarding the security event; and automatically performing one or more investigative operations concerning the associated user with respect to the security event.
Humphrey, in a similar system, teaches generating one or more of recommended next steps for additional investigation of the security event, recommended actions responsive to the security event, and disclaimers regarding the security event (examiner notes that Humphrey teaches possibly recommending actions or taking actions autonomously in response to this threat, which is a security event; this covers the limitation of recommended actions responsive to the security event, as Humphrey teaches in par.86) and automatically performing one or more investigative operations concerning the associated user with respect to the security event (examiner notes that Humphrey teaches performing analytic investigation of events automatically through an intelligent machine, as Humphrey teaches in par.86, lines 1-6 and par.89, lines 1-10). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Scheidler, as modified by Soni, to include automatically performing one or more investigative operations, as taught and suggested by Humphrey, for the purpose of improving investigation efficiency, guiding human users, increasing the efficiency of a human cyber security analyst, and providing key investigation information to human operators (Humphrey, par.86). Scheidler, as modified by Soni and Humphrey, does not explicitly teach wherein the assigned risk level is based, at least in part, upon an identity of the associated user.
Kirti teaches, in a similar system, wherein the assigned risk level is based, at least in part, upon an identity of the associated user (Kirti teaches that activity data can include the user account or other user identifier for the user associated with the events or statistics; Kirti, par. 102 and par. 115-116). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Scheidler, as modified by Soni and Humphrey, to base the assigned risk level upon an identity of the associated user, as taught and suggested by Kirti, for the purpose of determining a threat of security posed by the application based on use of the application and managing access to applications to minimize security threats and risks in a computing environment of the organization (Kirti, abstract).
For claim 22, Scheidler in view of Soni, Humphrey, and Kirti discloses the system of claim 21. Scheidler further teaches: if such monitored activity is indicative of a security event, generating an initial notification (such as alerts displayed on the user interface) of the security event, wherein the initial notification includes a computer-readable language portion (the user interface displays text for the user to read) that defines one or more specifics of the security event (Scheidler, col. 5, lines 49-50 and col. 19, lines 50-60).
For claim 23, Scheidler in view of Soni, Humphrey, and Kirti discloses the system of claim 21. Scheidler further teaches: iteratively (defined as continuously in par. 0058 of the specification of the present application) processing the initial notification using a generative AI model (machine learning) and a formatting script to produce a summarized human-readable report for the initial notification (Scheidler, col. 8, lines 9-20 and col. 19, lines 50-65).
For claim 24, Scheidler in view of Soni, Humphrey, and Kirti discloses the system of claim 21. Scheidler further teaches wherein the computing platform includes a plurality of security-relevant subsystems (systems such as database servers, web servers, initial data import, and automatic job setup) (Scheidler, col. 6, lines 45-68 and col. 11, lines 55-68 to col. 12, lines 1-15).
For claim 25, Scheidler in view of Soni, Humphrey, and Kirti discloses the system of claim 21. Scheidler further teaches wherein monitoring activity within a computing platform includes: monitoring activity within one or more of the plurality of security-relevant subsystems of the computing platform (the system monitors the user activities in the database servers for analytics) (Scheidler, col. 6, lines 45-68 and col. 12, lines 10-30).
For claim 26, Scheidler in view of Soni, Humphrey, and Kirti discloses the system of claim 21. Scheidler further teaches wherein iteratively processing the initial notification using a generative AI model (machine learning) and a formatting script to produce a summarized human-readable report for the initial notification (Scheidler, col. 8, lines 9-20 and col. 19, lines 50-65) includes: iteratively processing the initial notification using the generative AI model (machine learning), the formatting script, and/or one or more tools (using the user interface to visualize messaging alerts) to produce the summarized human-readable report for the initial notification (col. 3, lines 15-25, col. 8, lines 9-20, and col. 19, lines 50-65).
For claim 27, Scheidler in view of Soni, Humphrey, and Kirti discloses the system of claim 21. Scheidler further teaches wherein the one or more tools include one or more of: a decoding tool to decode an encoded initial notification; a decompression tool to decompress a compressed initial notification; and an identification tool to identify an owner of a domain associated with the initial notification (the Blindspotter security system uses a tool to identify) (Scheidler, col. 5, lines 10-15, and col. 20, lines 34-50).
Claim(s) 8-9, 18-19, and 28-29 is/are rejected under 35 U.S.C. 103 as being unpatentable over Scheidler et al (10681060) in view of Applicant IDS’s Soni et al (2024/0171451), Humphrey et al (2019/0260764), and Kirti et al (2017/0251013) as applied to the claims above, and further in view of Meier-Hellstern et al (2023/0124288).
For claims 8, 18, and 28, Scheidler, as modified by Soni, Humphrey, and Kirti, teaches all the limitations as previously set forth and further teaches wherein iteratively processing the initial notification using a generative AI model and a formatting script to produce a summarized human-readable report for the initial notification includes: iteratively processing the initial notification (col. 3, lines 15-25, col. 8, lines 9-20, and col. 19, lines 50-65). However, Scheidler, Soni, Humphrey, and Kirti fail to teach using a large language model.
Meier-Hellstern teaches, in similar machine learning, using a large language model (par. 23, lines 5-8). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the machine learning of Scheidler, Soni, Humphrey, and Kirti to include using a large language model, as taught and suggested by Meier-Hellstern, for the purpose of detecting and filtering undesirable content associated with a plurality of different contexts and selecting the appropriate prompt to use to query the language model with the user request given the determined context (Meier-Hellstern, par. 23).
For claims 9, 19, and 29, Scheidler, as modified by Soni, Humphrey, and Kirti, teaches all the limitations as previously set forth and further teaches wherein iteratively processing the initial notification using a generative AI model and a formatting script to produce a summarized human-readable report for the initial notification includes: producing the summarized human-readable report for the initial notification (col. 3, lines 15-25, col. 8, lines 9-20, and col. 19, lines 50-65). However, Scheidler, Soni, Humphrey, and Kirti fail to teach utilizing prompt engineering.
Meier-Hellstern further teaches utilizing prompt engineering (par. 23, lines 5-8). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the machine learning of Scheidler, Soni, Humphrey, and Kirti to include utilizing prompt engineering, as taught and suggested by Meier-Hellstern, for the purpose of detecting and filtering undesirable content associated with a plurality of different contexts and selecting the appropriate prompt to use to query the language model with the user request given the determined context (Meier-Hellstern, par. 23).
Claim(s) 10, 20, and 30 is/are rejected under 35 U.S.C. 103 as being unpatentable over Scheidler et al (10681060) in view of Applicant IDS’s Soni et al (2024/0171451), Humphrey et al (2019/0260764), and Kirti et al (2017/0251013) as applied to the claims above, and further in view of Cella et al (2022/0187847).
For claims 10, 20, and 30, Scheidler, as modified by Soni, Humphrey, and Kirti, teaches all the limitations as previously set forth and further teaches wherein iteratively processing the initial notification using a generative AI model and a formatting script to produce a summarized human-readable report for the initial notification includes: producing the summarized human-readable report for the initial notification (col. 3, lines 15-25, col. 8, lines 9-20, and col. 19, lines 50-65). However, Scheidler, Soni, Humphrey, and Kirti fail to teach utilizing several loops and/or nested loops.
Cella teaches, in similar machine learning, utilizing several loops and/or nested loops (par. 2419, lines 1-5). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the machine learning of Scheidler, Soni, Humphrey, and Kirti to include loops and/or nested loops, as taught and suggested by Cella, for the purpose of improving robot operations by using loop-based learning capabilities to train the robots to detect and provide guidance to avoid task execution risk factors, such as objects along a path, and the like (Cella, par. 2419).
Response to Amendments/Arguments
Applicant’s arguments with respect to claim(s) 1, 4-11, 14-21, and 24-30 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Based on the applicant’s amendments to claims 1, 11, and 21, the 35 U.S.C. 101 rejections have been withdrawn in view of the amendments.
The applicant’s arguments regarding the newly amended limitations in claims 1, 11, and 21 have been considered but are moot because the examiner has applied new art, Soni et al (2024/0171451), that covers the newly amended limitations.
Regarding the arguments directed to the dependent claims, said arguments are moot because the applied references are not considered to have the alleged differences and are therefore considered to properly show that for which they were cited.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AYUB A MAYE whose telephone number is (571)270-5037. The examiner can normally be reached Monday-Friday 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SHEWAYE GELAGAY can be reached at 571-272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AYUB A MAYE/Examiner, Art Unit 2436
/AMIE C. LIN/Primary Examiner, Art Unit 2436