DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The following is a final Office action in response to communications received 01/12/2026. Claims 1, 4, 8, 9, 11, 14, 16, and 19 have been amended. Claims 2, 12, and 17 have been cancelled. Therefore, claims 1, 3-11, 13-16, and 18-20 are pending and addressed below.
Response to Arguments
Applicant’s arguments filed 01/12/2026 have been fully considered but they are not persuasive. Applicant argues that, by incorporating previously indicated dependent claims 2, 12, and 17 into the respective independent claims, the claims are rendered allowable. However, the incorporated limitations now recite “wherein the second set of signals indicative of the expected values comprises at least one of:…”, such that only one of the recited alternatives need be taught by the prior art. Therefore, the Examiner maintains the rejections as set forth in the previous Office action.
Allowable Subject Matter
Claims 4-7, 14, and 19 (claims 5-7 depend from claim 4) are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 8-11, 13, 15, 16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Oliver (Pub. No. US 2024/0354405) in view of Bender et al. (Pub. No. US 2019/0362071).
As per claim 1, Oliver discloses a method for configuring execution of a service in a distributed services system, the method comprising: receiving, by a computer system, a first set of signals generated by one or more first machine learning models, the first set of signals indicative of a predicted risk associated with the execution of the service for a user; receiving, by the computer system, a second set of signals generated by one or more second machine learning models, the second set of signals indicative of expected values associated with the execution of the service for the user (…the computer system comprises a machine learning (ML) platform at which prior alerts are received from the endpoints…and divided into a plurality of clusters…each of the clusters has an associated cluster profile that specifies expected value constraints for attributes to new alerts (actual values) that determined to belong to the cluster…see par. 6, 50); determining, by the computer system, a corresponding expected value associated with each of a plurality of actions, wherein each of the plurality of actions corresponds to a configuration of the service (…comparing values of attributes for actual values to respective value constraints for attributes specified in expected value constraints…see par. 47); wherein the second set of signals indicative of the expected values comprises at least one of: a third set of one or more signals generated by a first user value machine learning model trained to detect a probability that each of the plurality of actions will cause the user to stop usage of the service; a fourth set of one or more signals generated by a second user value machine learning model trained to detect a value amount to the distributed services system that results from usage of the service by the user (see par. 
46-50); a fifth set of one or more signals generated by a third user value machine learning model trained to detect a value amount added to the distributed services system that results from a usage of the distributed services system by a user system; and a sixth set of one or more signals generated by a fourth user value machine learning model trained to detect a value of a standing of a user system with the distributed services system. Oliver does not explicitly disclose selecting, by the computer system, an action from the plurality of actions that maximizes the expected value associated with the execution of the service based on the predicted risk; and executing, by the computer system, the action configuring the execution of the service for the user in the distributed services system. However Bender discloses selecting, by the computer system, an action from the plurality of actions that maximizes the expected value associated with the execution of the service based on the predicted risk; and executing, by the computer system, the action configuring the execution of the service for the user in the distributed services system (…if the security risks have attained or exceeded the corresponding problem risk level for the user or device…the user (or subscriber) is notified (e.g., via security module) of the security risks or problems and corresponding actions…the notification may indicate the security risks and corresponding remediations for the network device based on the user and/or device profile (e.g., security risks and remediations satisfying the configured problem and remediation risk levels for the device or user, all security risks and remediations, etc.)…thus, the actions (and security service) are tailored to the specific network device…for example, a user may register a network device in the form of a heart rate monitor…rumors start appearing on network sites (e.g., web pages, etc.) 
that the heart rate monitor is calculating heart rate incorrectly when the environmental temperature is below a certain level (e.g., below freezing or 32° F.)…a profile for this device indicates the user accepts a high level of risk (e.g., a high problem risk level and low remediation risk level (to apply only trusted remediations)), thereby preventing occurrence of remediation or quarantine based on the rumors (since the rumors are a low level risk)…as more reports arise for this problem, the problem risk level increases from low to medium which is still insufficient to trigger a remediation…however, the device manufacturer reports a problem, and supplies a software fix (or patch) for the heart rate monitor…at this point, the problem risk level is increased to high, and the remediation is low risk (or trustworthy)…accordingly, the problem and remediation meet the problem and remediation risk levels, and the heart rate monitor is automatically updated with the software patch provided by the manufacturer, see par. 58, 63). Therefore, one of ordinary skill in the art would have found it obvious before the effective filing date of the claimed invention to combine Bender with Oliver to include the above limitations, because one of ordinary skill in the art would recognize that doing so would further improve the security of devices connected to a network (see Bender, par. 3).
As per claim 11, Oliver discloses a non-transitory machine-readable medium, having instructions stored therein, which when executed by a computer system having at least one processor, cause the computer system to perform operations for configuring execution of a service in a distributed services system, the operations comprising:
receiving, by the computer system, a first set of signals generated by one or more first machine learning models, the first set of signals indicative of a predicted risk associated with the execution of the service for a user; receiving, by the computer system, a second set of signals generated by one or more second machine learning models, the second set of signals indicative of expected values associated with the execution of the service for the user (…the computer system comprises a machine learning (ML) platform at which prior alerts are received from the endpoints…and divided into a plurality of clusters…each of the clusters has an associated cluster profile that specifies expected value constraints for attributes to new alerts (actual values) that determined to belong to the cluster…see par. 6, 50);
determining, by the computer system, a corresponding expected value associated with each of a plurality of actions, wherein each of the plurality of actions corresponds to a configuration of the service (…comparing values of attributes for actual values to respective value constraints for attributes specified in expected value constraints…see par. 47); wherein the second set of signals indicative of the expected values comprises at least one of: a third set of one or more signals generated by a first user value machine learning model trained to detect a probability that each of the plurality of actions will cause the user to stop usage of the service; a fourth set of one or more signals generated by a second user value machine learning model trained to detect a value amount to the distributed services system that results from usage of the service by the user (see par. 46-50); a fifth set of one or more signals generated by a third user value machine learning model trained to detect a value amount added to the distributed services system that results from a usage of the distributed services system by a user system; and a sixth set of one or more signals generated by a fourth user value machine learning model trained to detect a value of a standing of a user system with the distributed services system. Oliver does not explicitly disclose selecting, by the computer system, an action from the plurality of actions that maximizes the expected value associated with the execution of the service based on the predicted risk; and executing, by the computer system, the action configuring the execution of the service for the user in the distributed services system. 
However Bender discloses selecting, by the computer system, an action from the plurality of actions that maximizes the expected value associated with the execution of the service based on the predicted risk; and executing, by the computer system, the action configuring the execution of the service for the user in the distributed services system (…if the security risks have attained or exceeded the corresponding problem risk level for the user or device…the user (or subscriber) is notified (e.g., via security module) of the security risks or problems and corresponding actions…the notification may indicate the security risks and corresponding remediations for the network device based on the user and/or device profile (e.g., security risks and remediations satisfying the configured problem and remediation risk levels for the device or user, all security risks and remediations, etc.)…thus, the actions (and security service) are tailored to the specific network device…for example, a user may register a network device in the form of a heart rate monitor…rumors start appearing on network sites (e.g., web pages, etc.) 
that the heart rate monitor is calculating heart rate incorrectly when the environmental temperature is below a certain level (e.g., below freezing or 32° F.)…a profile for this device indicates the user accepts a high level of risk (e.g., a high problem risk level and low remediation risk level (to apply only trusted remediations)), thereby preventing occurrence of remediation or quarantine based on the rumors (since the rumors are a low level risk)…as more reports arise for this problem, the problem risk level increases from low to medium which is still insufficient to trigger a remediation…however, the device manufacturer reports a problem, and supplies a software fix (or patch) for the heart rate monitor…at this point, the problem risk level is increased to high, and the remediation is low risk (or trustworthy)…accordingly, the problem and remediation meet the problem and remediation risk levels, and the heart rate monitor is automatically updated with the software patch provided by the manufacturer, see par. 58, 63). Therefore, one of ordinary skill in the art would have found it obvious before the effective filing date of the claimed invention to combine Bender with Oliver to include the above limitations, because one of ordinary skill in the art would recognize that doing so would further improve the security of devices connected to a network (see Bender, par. 3).
As per claim 16, Oliver discloses a computer system, comprising: a memory storing one or more instructions; and a processor, coupled with the memory, configured to execute the one or more instructions causing the computer system to perform operations, comprising: receiving a first set of signals generated by one or more first machine learning models, the first set of signals indicative of a predicted risk associated with execution of a service for a user in a distributed service system, receiving a second set of signals generated by one or more second machine learning models, the second set of signals indicative of expected values associated with the execution of the service for the user (…the computer system comprises a machine learning (ML) platform at which prior alerts are received from the endpoints…and divided into a plurality of clusters…each of the clusters has an associated cluster profile that specifies expected value constraints for attributes to new alerts (actual values) that determined to belong to the cluster…see par. 6, 50),
determining a corresponding expected value associated with each of a plurality of actions, wherein each of the plurality of actions corresponds to a configuration of the service (…comparing values of attributes for actual values to respective value constraints for attributes specified in expected value constraints…see par. 47); wherein the second set of signals indicative of the expected values comprises at least one of: a third set of one or more signals generated by a first user value machine learning model trained to detect a probability that each of the plurality of actions will cause the user to stop usage of the service; a fourth set of one or more signals generated by a second user value machine learning model trained to detect a value amount to the distributed services system that results from usage of the service by the user (see par. 46-50); a fifth set of one or more signals generated by a third user value machine learning model trained to detect a value amount added to the distributed services system that results from a usage of the distributed services system by a user system; and a sixth set of one or more signals generated by a fourth user value machine learning model trained to detect a value of a standing of a user system with the distributed services system. Oliver does not explicitly disclose selecting, by the computer system, an action from the plurality of actions that maximizes the expected value associated with the execution of the service based on the predicted risk; and executing, by the computer system, the action configuring the execution of the service for the user in the distributed services system. 
However Bender discloses selecting, by the computer system, an action from the plurality of actions that maximizes the expected value associated with the execution of the service based on the predicted risk; and executing, by the computer system, the action configuring the execution of the service for the user in the distributed services system (…if the security risks have attained or exceeded the corresponding problem risk level for the user or device…the user (or subscriber) is notified (e.g., via security module) of the security risks or problems and corresponding actions…the notification may indicate the security risks and corresponding remediations for the network device based on the user and/or device profile (e.g., security risks and remediations satisfying the configured problem and remediation risk levels for the device or user, all security risks and remediations, etc.)…thus, the actions (and security service) are tailored to the specific network device…for example, a user may register a network device in the form of a heart rate monitor…rumors start appearing on network sites (e.g., web pages, etc.) 
that the heart rate monitor is calculating heart rate incorrectly when the environmental temperature is below a certain level (e.g., below freezing or 32° F.)…a profile for this device indicates the user accepts a high level of risk (e.g., a high problem risk level and low remediation risk level (to apply only trusted remediations)), thereby preventing occurrence of remediation or quarantine based on the rumors (since the rumors are a low level risk)…as more reports arise for this problem, the problem risk level increases from low to medium which is still insufficient to trigger a remediation…however, the device manufacturer reports a problem, and supplies a software fix (or patch) for the heart rate monitor…at this point, the problem risk level is increased to high, and the remediation is low risk (or trustworthy)…accordingly, the problem and remediation meet the problem and remediation risk levels, and the heart rate monitor is automatically updated with the software patch provided by the manufacturer, see par. 58, 63). Therefore, one of ordinary skill in the art would have found it obvious before the effective filing date of the claimed invention to combine Bender with Oliver to include the above limitations, because one of ordinary skill in the art would recognize that doing so would further improve the security of devices connected to a network (see Bender, par. 3).
As per claims 3, 13, 18, the combination of Oliver and Bender discloses adjusting, by the computer system, the predicted risk by a risk value, the risk value indicative of an amount of risk associated with the user, a segment of users associated with the user, or the service to be executed for the user; generating, by the computer system, an adjusted expected value associated with each of the plurality of actions based on the adjusted predicted risk; and selecting, by the computer system, the action from the plurality of actions that maximizes the adjusted expected value associated with the execution of the service based on the adjusted predicted risk (Bender: see par. 39-41, 53-54). The motivation for claims 3, 13, 18 is the same motivation as in claims 1, 11, 16 above.
As per claim 8, the combination of Oliver and Bender discloses wherein the first set of signals indicative of the predicted risk comprises signals associated with a predicted loss due to fraud associated with the execution of the service (Oliver: see par. 29, 42).
As per claim 9, the combination of Oliver and Bender discloses wherein the first set of signals indicative of the predicted risk comprises signals associated with a predicted loss due to a failure by the user to satisfy an obligation associated with the execution of the service (Bender: see par. 63). The motivation for claim 9 is the same as that set forth for claim 1 above.
As per claims 10, 15, 20, the combination of Oliver and Bender discloses wherein the one or more first machine learning models and the one or more second machine learning models are independent sets of machine learning models (Oliver: see par. 44-45).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see form PTO-892).
The following patents and papers are cited to further show the state of the art at the time of Applicant’s invention with respect to configuring execution of a service in a distributed services system.
Crabtree et al (Pub. No. US 2022/0012814); “Platform for Autonomous Risk Assessment and Quantification for Cyber Insurance Policies”;
-Teaches analyzing the likelihood of operational interruption or loss from a plurality of computer and information technology related risks by utilizing machine learning to predict risk from both accidental events and deliberate malicious activity (see par. 7).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GHAZAL B SHEHNI whose telephone number is (571)270-7479. The examiner can normally be reached Mon-Fri 9am-5pm PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Philip Chea, can be reached at 571-272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GHAZAL B SHEHNI/Primary Examiner, Art Unit 2499