DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-14, received on 20 June 2024, are currently pending and have been considered by the Examiner in this Office Action.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
The claims recite subject matter within a statutory category as a process (claims 11-14) and a machine (claims 1-10), which recite the following steps (Subject Matter Eligibility (SME) Test Step 1: Yes):
obtaining, via a user device interface, user data from at least one user device, the user data comprising at least one of health data, physiology data, mood data, and journal data;
assigning, via an intake and setup engine, a risk status to a user profile associated with the at least one user device;
obtaining, via an action plan engine, action plan data associated with the user profile;
analyzing, via a user data analysis engine, the user data to identify at least one of patterns, trends and shifts in the user data;
generating, via the user data analysis engine, feedback data to be provided to the at least one user device based on the analysis;
predicting, via the user data analysis engine, a crisis scenario based on the analysis; and
initiating, via a crisis intervention engine, a crisis intervention protocol when the crisis scenario is predicted, the crisis intervention protocol determined based on the assigned risk status.
These steps of obtaining user data, assigning a risk status to a user profile, obtaining action plan data, analyzing user data to identify at least one pattern, trend, and/or shift in the user data, predicting a crisis scenario based on the analysis, and/or initiating a crisis intervention protocol, as drafted, under the broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting steps as performed by the generic computer components, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the “obtaining” various forms of data/action plans language, obtaining data in the context of this claim encompasses a mental process of the user either reading or physically obtaining data from one or more records. Similarly, the limitation of analyzing various data to predict a crisis scenario, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, such as a human observing various patient data, determining anomalies in said data, and determining that the patient is at a heightened risk for a crisis scenario. Likewise, but for the “initiating a crisis intervention protocol” language, initiating a crisis intervention protocol in the context of this claim encompasses a mental process of a human performing some action in response to a determination that the patient is at a heightened risk for a crisis scenario. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
These steps of obtaining user data, assigning a risk status to a user profile, obtaining action plan data, analyzing user data to identify at least one pattern, trend, and/or shift in the user data, predicting a crisis scenario based on the analysis, and/or initiating a crisis intervention protocol, as drafted, under the broadest reasonable interpretation, also fall within methods of organizing human activity. MPEP 2106.04(a)(2)(II) describes various methods of organizing human activity, including managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). Specific examples provided in MPEP 2106.04(a)(2)(II)(C) include a mental process that a neurologist should follow when testing a patient for nervous system malfunctions. The steps presented above substantially relate to said specific example at least by the similar subject matter regarding a physician or user employing said computer system to determine patient crisis intervention efforts. Furthermore, at a much broader level, the steps recited in the claims and performed by the computer system heavily relate to a user, physician, etc., following certain instructions, i.e., protocols, in response to a determination that a patient is at a heightened risk status. That is, the system is effectively managing the personal behavior of the user, physician, etc., regarding instructions that should be followed for crisis interventions. As such, under BRI, the claim recites methods of organizing human activity, i.e., an abstract idea.
Dependent claims recite additional subject matter which further narrows or defines the abstract idea embodied in the claims (such as claims 2-10 & 12-14, reciting particular aspects of how analyzing data, determining a risk status, and/or determining a communication protocol may be performed in the mind but for recitation of generic computer components) (SME Test Step 2A, Prong 1: Yes).
This judicial exception is not integrated into a practical application. In particular, the additional elements, other than the abstract idea per se, do not integrate the abstract idea into a practical application because they amount to no more than limitations which:
amount to mere instructions to apply an exception (such as recitation of a user device interface, a user device, an intake and setup engine, an action plan engine, a user data analysis engine, a crisis intervention engine amounts to invoking computers as a tool to perform the abstract idea, see applicant’s specification [035] for a user device interface, [024] for a user device, [036] for an intake and setup engine, [037] for an action plan engine, [038] for a user data analysis engine, [039] for a crisis intervention engine, see MPEP 2106.05(f));
add insignificant extra-solution activity to the abstract idea (such as recitation of obtaining user data from the at least one user device, the user data comprising health data, physiology data, mood data, and journal data, and obtaining action plan data, which amounts to mere data gathering; recitation of generating feedback data to be provided to the at least one user device based on the analysis and analyzing the user data to identify at least one of patterns, trends, and shifts in the user data, which amounts to selecting a particular data source or type of data to be manipulated; and recitation of assigning a risk status to a user profile associated with the at least one user device, predicting a crisis scenario based on the analysis, and initiating a crisis intervention protocol when the crisis scenario is predicted, which amounts to insignificant application, see MPEP 2106.05(g));
generally link the abstract idea to a particular technological environment or field of use (such as recitation of crisis scenarios and suicide interventions via user devices, see MPEP 2106.05(h)).
Dependent claims recite additional subject matter which amount to limitations consistent with the additional elements in the independent claims (such as claims 2-10 & 12-14, recitation of one or more entity’s user devices, a user data analysis engine amounts to invoking computers as a tool to perform the abstract idea, see applicant’s specification [024] for one or more entity’s user devices, [038] for a user data analysis engine, additional limitations which amount to invoking computers as a tool to perform the abstract idea; claims 7-8 & 13, which recite limitations relating to alerting a therapist/peer support counselor via user device, alerting an emergency contact, alerting emergency medical services, obtaining geolocation information, providing geolocation information, sending a text message, additional limitations which add insignificant extra-solution activity to the abstract idea which amounts to mere data gathering; claims 3-5, 10 & 13, which recite limitations relating to determining a communication protocol based on the assigned risk status analysis, generating a prompt on a user device, updating the action plan, additional limitations which add insignificant extra-solution activity to the abstract idea by selecting a particular data source or type of data to be manipulated; claims 2-3, which recite limitations relating to the user device being a specific type of user’s device, employing artificial intelligence and/or machine learning algorithms, additional limitations which generally link the abstract idea to a particular technological environment or field of use; claims 5-6, 9-10, 12-14, which recite limitations relating to initiating a phone call, recording crisis details, updating the action plan, activating a safety plan, generating a prompt via an application on the user device and/or initiating a phone call amounts to insignificant application, see MPEP 2106.05(g)). 
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and do not impose a meaningful limit to integrate the abstract idea into a practical application (SME Test Step 2A, Prong 2: No).
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception, add insignificant extra-solution activity to the abstract idea, and generally link the abstract idea to a particular technological environment or field of use. Additionally, the additional limitations, other than the abstract idea per se, amount to no more than limitations which:
amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields (such as obtaining user data from the at least one user device, the user data comprising health data, physiology data, mood data, and journal data, obtaining action plan data, and initiating a crisis intervention protocol when the crisis scenario is predicted (which under BRI includes simply placing a phone call or transmitting an alert), e.g., receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i); generating feedback data to be provided to the at least one user device based on the analysis, analyzing the user data to identify at least one of patterns, trends, and shifts in the user data, assigning a risk status to a user profile associated with the at least one user device, and predicting a crisis scenario based on the analysis, e.g., performing repetitive calculations, Flook, MPEP 2106.05(d)(II)(ii); maintaining user action plan and profile data, such as in a medical record, e.g., electronic recordkeeping, Alice Corp., MPEP 2106.05(d)(II)(iii); storing user data, storing risk statuses, storing action plan data, storing feedback data, storing computerized instructions to perform one or more steps recited, and storing one or more computerized engines/modules for performing steps recited, e.g., storing and retrieving information in memory, Versata Dev. Group, MPEP 2106.05(d)(II)(iv); and obtaining action plan data associated with a user profile, which under BRI could include data from one or more data records, e.g., electronic scanning or extracting data from a physical document, Content Extraction, MPEP 2106.05(d)(II)(v)).
Dependent claims recite additional subject matter which, as discussed above with respect to integration of the abstract idea into a practical application, amounts to invoking computers as a tool to perform the abstract idea. Dependent claims recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims (such as claims 2-10 & 12-14, additional limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields; claims 5, 7-8 & 13, which recite limitations relating to initiating a phone call, alerting a therapist/peer support counselor via user device, alerting an emergency contact, alerting emergency medical services, obtaining geolocation information, providing geolocation information, and sending a text message, e.g., receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i); claims 3-6, 9-10 & 12-14, which recite limitations relating to initiating a phone call, recording crisis details, updating the action plan, activating a safety plan, generating a prompt via an application on the user device and/or initiating a phone call, and determining a communication protocol based on the assigned risk status analysis, e.g., performing repetitive calculations, Flook, MPEP 2106.05(d)(II)(ii); claim 13, which recites limitations relating to updating an action plan, such as in an electronic medical record or other record, e.g., electronic recordkeeping, Alice Corp., MPEP 2106.05(d)(II)(iii); claims 2-10 & 12-14, which recite limitations relating to storing user data, storing risk statuses, storing action plan data, storing feedback data, storing computerized instructions to perform one or more steps recited, and storing one or more computerized engines/modules for performing steps recited, e.g., storing and retrieving information in memory, Versata Dev. Group, MPEP 2106.05(d)(II)(iv)).
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation (SME Test Step 2B: No).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by LaBorde et al. (U.S. Patent Application Publication No. 2023/0138557), hereinafter “LaBorde”.
Claim 1 –
Regarding Claim 1, LaBorde discloses a system for predicting crisis scenarios and initiating suicide intervention comprising:
a user device interface configured to communicate with at least one user device and obtain user data from the at least one user device (See LaBorde Par [0061] which discloses an interface providing a data link layer and network layer functions such as formatting packet bits to an appropriate format from various devices, including client devices, network devices, etc., and further specifically mentions a network interface embodiment), the user data comprising at least one of health data, physiology data, mood data, and journal data (See LaBorde Par [0056] which discloses data inputs such as healthcare data from a variety of sources; See LaBorde Par [0224] which discloses the determination of moods and emotions for a user, i.e. mood data, recognized using computerized methods; See LaBorde Par [0188]-[0214] which discloses varying inputs that can be received by the system/models for determination of potential risk of patient self-harm/suicide);
an intake and setup engine configured to assign a risk status to a user profile associated with the at least one user device (See LaBorde Par [0013]-[0014], [0076], & [0112] which discloses one or more computerized embodiments, including an “engine” and/or memory with modules/controllers/servers for performing efforts described throughout LaBorde’s disclosure, and is therefore understood to read on each of the “engine” implementations recited hereinafter; See LaBorde Par [0077] which discloses predicting a suicide risk associated with a patient; See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms and techniques, such that the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, is determined, i.e., a risk status being assigned to a user profile);
an action plan engine configured to obtain action plan data associated with the user profile (See LaBorde Par [0287] which discloses a server also determining whether deployment of any given resource is likely to mitigate the predicted suicide risk for a given patient, such that the server recommends the deployment of an available resource if the probability-weighted reduction in the risk of suicide exceeds a particular threshold);
a user data analysis engine (See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms techniques) configured to:
analyze the user data to identify at least one of patterns, trends and shifts in the user data (See LaBorde Par [0056] which discloses data inputs such as healthcare data from a variety of sources; See LaBorde Par [0224] which discloses the determination of moods and emotions for a user, i.e. mood data, recognized using computerized methods; See LaBorde Par [0188]-[0214] which discloses varying inputs that can be received by the system/models for determination of potential risk of patient self-harm/suicide; See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms and techniques, such that the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, is determined, i.e., a risk status being assigned to a user profile; See LaBorde Par [0092] which discloses the system’s predictive analytics determining aggregate patterns over time and known outcomes from antecedent cases);
generate feedback data to be provided to the at least one user device based on the analysis (See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms and techniques, such that the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, is determined, i.e., a risk status being assigned to a user profile; See LaBorde Par [0104] which discloses one or more user interfaces in one of its client applications that allow for visualization tools and aggregate views to be provided to one or more users/clients, and more specifically describes at LaBorde Par [0283] that a backend device generates a visualization or graphical image based upon the output from the SOM network, such that the backend device sends the graphical image to the server, which either sends it to the client, i.e. user, device and/or renders the image on a display of a website, such that it is understood that “feedback data” is thereby provided to a user device); and
predict a crisis scenario based on the analysis (See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms and techniques, such that the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, is determined; See LaBorde Par [0114] which discloses the DCE or server device utilizing machine learning to predict events, i.e. scenarios, related to one or more crises, such as risk of suicide or changes therein); and
a crisis intervention engine configured to initiate a crisis intervention protocol when the crisis scenario is predicted, the crisis intervention protocol determined based on the assigned risk status (See LaBorde Par [0287] which discloses a server determining whether deployment of any given resource is likely to mitigate the predicted suicide risk for a given patient, such that the server recommends the deployment of an available resource if the probability-weighted reduction in the risk of suicide exceeds a particular threshold; See LaBorde Par [0285] which discloses one or more backend devices, such as one or more server devices, in which the backend devices can determine whether there is an opportunity of high probability of successfully mitigating the likelihood of a given predicted suicide by allocating resources; See LaBorde Par [0111] which discloses determining what action needs to be taken or automated routing of notifications from the system to a call center for an outreach call/check by an individual with specific training in suicide prevention and/or potential notification of individuals authorized for said patient, including family members, spouses, healthcare providers, case managers, social workers, or any other individuals, such that the system can be configured to request specific actions by a plurality of said individuals, thereby constituting a crisis intervention protocol).
Claim 2 –
Regarding Claim 2, LaBorde discloses the system of claim 1 in its entirety. LaBorde further discloses a system, wherein:
the at least one user device comprises at least one of a patient user device, a therapist user device, and a peer support counselor user device (See LaBorde Par [0010] which discloses the one or more client devices being used by healthcare and social services workers, patients, family members, and care givers amongst others).
Claim 3 –
Regarding Claim 3, LaBorde discloses the system of claim 1 in its entirety. LaBorde further discloses a system, wherein:
the user data analysis engine is further configured to employ artificial intelligence and machine learning algorithms in analyzing the user data and predicting the crisis scenario (See LaBorde Par [0004] which discloses the use of AI technologies, such as machine learning and deep learning, throughout the disclosure; See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms and techniques, such that the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, is determined; See LaBorde Par [0114] which discloses the DCE or server device utilizing machine learning to predict events, i.e. scenarios, related to one or more crises, such as risk of suicide or changes therein).
Claim 4 –
Regarding Claim 4, LaBorde discloses the system of claim 1 in its entirety. LaBorde further discloses a system, wherein:
initiating the crisis intervention protocol comprises:
determining a communication protocol based on the assigned risk status (See LaBorde Par [0111] which discloses determining what action needs to be taken or automated routing of notifications from the system to a call center for an outreach call/check by an individual with specific training in suicide prevention and/or potential notification of individuals authorized for said patient, including family members, spouses, healthcare providers, case managers, social workers, or any other individuals, such that the system can be configured to request specific actions by a plurality of said individuals, thereby constituting a crisis intervention, i.e. communication, protocol; See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms and techniques, such that the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, is determined, i.e., a risk status being assigned to a user profile);
initiating communication with the at least one user device according to the communication protocol (See LaBorde Par [0111] which discloses determining what action needs to be taken or automated routing of notifications from the system to a call center for an outreach call/check by an individual with specific training in suicide prevention and/or potential notification of individuals authorized for said patient, including family members, spouses, healthcare providers, case managers, social workers, or any other individuals, such that the system can be configured to request specific actions by a plurality of said individuals, thereby constituting initiation of a crisis intervention, i.e. communication, protocol);
activating a safety plan if a response is not received from the at least one user device within a predetermined time period (See LaBorde Par [0111] which discloses obtaining real time situational awareness and alerts about individuals in need of intervention, outreach, etc., such that it is determined whether or not contact was established with the individual; See LaBorde Par [0114] which discloses that, upon determining a patient was not able to be reached, the trained model can further flag the patient for risk for suicide, leading to a significant increase in the individual’s predicted risk for suicide, and thereby the interventions determined at LaBorde Par [0111]-[0112] & [0285]-[0287]).
Claim 5 –
Regarding Claim 5, LaBorde discloses the system of claim 4 in its entirety. LaBorde further discloses a system, wherein:
the communication protocol for a low risk status comprises at least one of generating a prompt via an application on the at least one user device and sending a text message to the at least one user device (See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms and techniques, such that the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, is determined, i.e., a risk status being assigned to a user profile; See LaBorde Par [0014] which discloses sending a request message to the server device and requesting/receiving a reply message from said client devices; See LaBorde Par [0287] which discloses a server determining whether deployment of any given resource is likely to mitigate the predicted suicide risk for a given patient, such that the server recommends the deployment of an available resource if the probability-weighted reduction in the risk of suicide exceeds a particular threshold; See LaBorde Par [0285] which discloses one or more backend devices, such as one or more server devices, in which the backend devices can determine whether there is an opportunity of high probability of successfully mitigating the likelihood of a given predicted suicide by allocating resources).
Claim 6 –
Regarding Claim 6, LaBorde discloses the system of claim 4 in its entirety. LaBorde further discloses a system, wherein:
the communication protocol for a high risk status comprises initiating a phone call to the at least one user device (See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms and techniques, such that the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, is determined, i.e., a risk status being assigned to a user profile; See LaBorde Par [0111] which discloses obtaining real time situational awareness and alerts about individuals in need of intervention, outreach, etc., such that the system can be configured to ask particular individuals to check on (call or visit) the individual and to report back specific information via the system's client application).
Claim 7 –
Regarding Claim 7, LaBorde discloses the system of claim 6 in its entirety. LaBorde further discloses a system, further comprising:
alerting at least one of a therapist user device and a peer support counselor user device of a requirement to initiate the phone call to the at least one user device (See LaBorde Par [0010] which discloses the one or more client devices being used by healthcare and social services workers, patients, family members, and care givers amongst others; See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms and techniques, such that the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, is determined, i.e., a risk status being assigned to a user profile; See LaBorde Par [0111] which discloses obtaining real time situational awareness and alerts about individuals in need of intervention, outreach, etc., such that the system can be configured to ask particular individuals to check on (call or visit) the individual and to report back specific information via the system's client application).
Claim 8 –
Regarding Claim 8, LaBorde discloses the system of claim 4 in its entirety. LaBorde further discloses a system, wherein:
activating the safety plan comprises at least one of:
alerting an emergency contact (See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms and techniques, such that the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, is determined, i.e., a risk status being assigned to a user profile; See LaBorde Par [0111] which discloses obtaining real time situational awareness and alerts about individuals in need of intervention, outreach, etc., such that the system can be configured to ask particular authorized individuals, i.e. emergency contacts, to check on (call or visit) the individual and to report back specific information via the system's client application);
alerting emergency medical services (While not “emergency medical services” per se, see LaBorde Par [0106] which discloses the system recommending, at particular junctures across a patient’s care and/or risk statuses, and especially for a patient with a high risk for suicide, a referral for mental health services locally or in the community or at another facility, be it psychological services, psychiatry services, group counseling services, home health services, etc., and further describes in LaBorde Par [0107] & [0287] that the timeliness of services rendered and frequency of services rendered can be determined by the system; additionally, while LaBorde may not entirely read on “emergency medical services” per se, the “activating the safety plan comprises at least one of” language renders this limitation optional and therefore LaBorde still reads on the entirety of this claim, unless this limitation is specifically elected in future amendments);
obtaining geolocation information associated with the at least one user device (See LaBorde Par [0057] & [0120] which discloses receiving data including location data and/or patient location, such as via RFID tags); and
providing the geolocation information to at least one entity associated with the safety plan (See LaBorde Par [0057] & [0120] which discloses receiving data including location data and/or patient location, such as via RFID tags, such that the RCE records can be transmitted to the server and subsequently provided to users; See LaBorde Par [0104] which discloses one or more user interfaces in one of its client applications that allows for visualization tools and aggregate views to be provided to one or more users/clients, and more specifically describes at LaBorde Par [0283] that a backend device generates a visualization or graphical image based upon the output from the SOM network, such that the backend device sends the graphical image to the server, which either sends it to the client, i.e. user, device and/or renders the image on a display of a website, such that the geolocation information would thereby be provided to a user device).
Claim 9 –
Regarding Claim 9, LaBorde discloses the system of claim 1 in its entirety. LaBorde further discloses a system, further comprising:
recording crisis details when the crisis scenario is predicted (See LaBorde Par [0066]-[0067] which discloses recording data indicative of an event, e.g. crisis scenario, including near field communication (NFC) established with the DCE or another RFID tag, a time duration for which the RFID tag has been within a certain location, historical data, etc.; See LaBorde Par [0077]-[0078] which discloses determining data associated with the identification for each of the one or more RFID tags and, upon predicting a risk associated with a patient event, storing/recording said data from the DCE in the database to be associated with said RFID/event tag),
the crisis details comprising user data associated with the crisis scenario (See LaBorde Par [0066]-[0067] which discloses recording data indicative of an event, e.g. crisis scenario, including near field communication (NFC) established with the DCE or another RFID tag, a time duration for which the RFID tag has been within a certain location, historical data, etc., i.e. user data; See LaBorde Par [0077]-[0078] which discloses determining data associated with the identification for each of the one or more RFID tags and, upon predicting a risk associated with a patient event, storing/recording said data from the DCE in the database to be associated with said RFID/event tag).
Claim 10 –
Regarding Claim 10, LaBorde discloses the system of claim 9 in its entirety. LaBorde further discloses a system, further comprising:
updating the action plan based on the recorded crisis details (See LaBorde Par [0066]-[0067] which discloses recording data indicative of an event, e.g. crisis scenario, including near field communication (NFC) established with the DCE or another RFID tag, a time duration for which the RFID tag has been within a certain location, historical data, etc.; See LaBorde Par [0077]-[0078] which discloses determining data associated with the identification for each of the one or more RFID tags and, upon predicting a risk associated with a patient event, storing/recording said data from the DCE in the database to be associated with said RFID/event tag; See LaBorde Par [0088] which discloses providing real time analytics about patients that have been identified to be at risk and changes in their status or attributes of their medical care and services over time, i.e. the action plan, e.g. services and/or medical care, is updated in real time based on said event data and patient risk profile data/metrics).
Claim 11 –
Regarding Claim 11, LaBorde discloses a method for predicting crisis scenarios and initiating suicide intervention, comprising:
obtaining, via a user device interface, user data from at least one user device, the user data comprising at least one of health data, physiology data, mood data, and journal data (See LaBorde Par [0056] which discloses data inputs such as healthcare data from a variety of sources; See LaBorde Par [0224] which discloses the determination of moods and emotions for a user, i.e. mood data, recognized using computerized methods; See LaBorde Par [0188]-[0214] which discloses varying inputs that can be received by the system/models for determination of potential risk of patient self-harm/suicide);
assigning, via an intake and setup engine, a risk status to a user profile associated with the at least one user device (See LaBorde Par [0013]-[0014], [0076], & [0112] which discloses one or more computerized embodiments, including an “engine” and/or memory with modules/controllers/servers for performing the efforts described throughout LaBorde’s disclosure, and is therefore understood to read on each of the “engine” implementations recited hereinafter; See LaBorde Par [0077] which discloses predicting a suicide risk associated with a patient; See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms/techniques to determine the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, i.e. a risk status being assigned to a user profile);
obtaining, via an action plan engine, action plan data associated with the user profile (See LaBorde Par [0287] which discloses a server also determining whether deployment of any given resource is likely to mitigate the predicted suicide risk for a given patient, such that the server recommends the deployment of an available resource if the probability weighted reduction in the risk of suicide exceeded a particular threshold);
analyzing, via a user data analysis engine, the user data to identify at least one of patterns, trends and shifts in the user data (See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms/techniques to determine the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, i.e. a risk status being assigned to a user profile; See LaBorde Par [0056] which discloses data inputs such as healthcare data from a variety of sources; See LaBorde Par [0224] which discloses the determination of moods and emotions for a user, i.e. mood data, recognized using computerized methods; See LaBorde Par [0188]-[0214] which discloses varying inputs that can be received by the system/models for determination of potential risk of patient self-harm/suicide; See LaBorde Par [0092] which discloses the system’s predictive analytics determining aggregate patterns over time and known outcomes from antecedent cases);
generating, via the user data analysis engine, feedback data to be provided to the at least one user device based on the analysis (See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms/techniques to determine the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, i.e. a risk status being assigned to a user profile; See LaBorde Par [0104] which discloses one or more user interfaces in one of its client applications that allows for visualization tools and aggregate views to be provided to one or more users/clients, and more specifically describes at LaBorde Par [0283] that a backend device generates a visualization or graphical image based upon the output from the SOM network, such that the backend device sends the graphical image to the server, which either sends it to the client, i.e. user, device and/or renders the image on a display of a website, such that it is understood that “feedback data” is thereby provided to a user device);
predicting, via the user data analysis engine, a crisis scenario based on the analysis (See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms/techniques to determine the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time; See LaBorde Par [0114] which discloses the DCE or server device utilizing machine learning to predict events, i.e. scenarios, related to one or more crises, such as risk of suicide or changes therein); and
initiating, via a crisis intervention engine, a crisis intervention protocol when the crisis scenario is predicted, the crisis intervention protocol determined based on the assigned risk status (See LaBorde Par [0287] which discloses a server determining whether deployment of any given resource is likely to mitigate the predicted suicide risk for a given patient, such that the server recommends the deployment of an available resource if the probability weighted reduction in the risk of suicide exceeded a particular threshold; See LaBorde Par [0285] which discloses one or more backend devices, such as one or more server devices, in which the backend devices can determine whether there is an opportunity of high probability of successfully mitigating the likelihood of a given predicted suicide by allocating resources; See LaBorde Par [0111] which discloses determining what action needs to be taken or automated routing of notifications from the system to a call center for an outreach call/check by an individual with specific training in suicide prevention and/or potential notification of individuals authorized for said patient, including family members, spouses, healthcare providers, case managers, social workers, or any other individuals, such that the system can be configured to request specific actions by a plurality of said individuals, thereby constituting a crisis intervention protocol).
Claim 12 –
Regarding Claim 12, LaBorde discloses the method of Claim 11 in its entirety. LaBorde further discloses a method, wherein:
initiating the crisis intervention protocol comprises:
determining a communication protocol based on the assigned risk status (See LaBorde Par [0111] which discloses determining what action needs to be taken or automated routing of notifications from the system to a call center for an outreach call/check by an individual with specific training in suicide prevention and/or potential notification of individuals authorized for said patient, including family members, spouses, healthcare providers, case managers, social workers, or any other individuals, such that the system can be configured to request specific actions by a plurality of said individuals, thereby constituting a crisis intervention, i.e. communication, protocol; See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms/techniques to determine the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, i.e. a risk status being assigned to a user profile);
initiating communication with the at least one user device according to the communication protocol (See LaBorde Par [0111] which discloses determining what action needs to be taken or automated routing of notifications from the system to a call center for an outreach call/check by an individual with specific training in suicide prevention and/or potential notification of individuals authorized for said patient, including family members, spouses, healthcare providers, case managers, social workers, or any other individuals, such that the system can be configured to request specific actions by a plurality of said individuals, thereby constituting initiation of a crisis intervention, i.e. communication, protocol);
activating a safety plan if a response is not received from the at least one user device within a predetermined time period (See LaBorde Par [0111] which discloses obtaining real time situational awareness and alerts about individuals in need of intervention, outreach, etc., such that it is determined whether or not contact was established with the individual; See LaBorde Par [0114] which discloses that, when a patient was not able to be reached, the trained model can further flag the patient for risk for suicide, leading to a significant increase in the individual’s predicted risk for suicide, and thereby to the interventions determined at LaBorde Par [0111]-[0112] & [0285]-[0287]).
Claim 13 –
Regarding Claim 13, LaBorde discloses the method of Claim 12 in its entirety. LaBorde further discloses a method, wherein:
the communication protocol for a low risk status comprises at least one of generating a prompt via an application on the at least one user device and sending a text message to the at least one user device (See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms/techniques to determine the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, i.e. a risk status being assigned to a user profile; See LaBorde Par [0014] which discloses sending a request message to the server device and requesting/receiving a reply message from said client devices; See LaBorde Par [0287] which discloses a server determining whether deployment of any given resource is likely to mitigate the predicted suicide risk for a given patient, such that the server recommends the deployment of an available resource if the probability weighted reduction in the risk of suicide exceeded a particular threshold; See LaBorde Par [0285] which discloses one or more backend devices, such as one or more server devices, in which the backend devices can determine whether there is an opportunity of high probability of successfully mitigating the likelihood of a given predicted suicide by allocating resources).
Claim 14 –
Regarding Claim 14, LaBorde discloses the method of Claim 12 in its entirety. LaBorde further discloses a method, wherein:
the communication protocol for a high risk status comprises initiating a phone call to the at least one user device (See LaBorde Par [0112] which discloses the DCE or server device determining the patient’s risk for suicide based upon data received and utilizing machine learning algorithms/techniques to determine the probability that a patient is at risk or will attempt to take their life, or changes in the risk profile or “risk signature” over time, i.e. a risk status being assigned to a user profile; See LaBorde Par [0111] which discloses obtaining real time situational awareness and alerts about individuals in need of intervention, outreach, etc., such that the system can be configured to ask particular individuals to check on (call or visit) the individual and report back specific information via the system's client application).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Shin et al. (Reference U – NPL – “Information and communication technology‑based interventions for suicide prevention implemented in clinical settings: a scoping review” – March 2023) reviews and identifies evidence on ICT-based interventions for suicide prevention implemented in clinical settings and characterizes implementation barriers and facilitators, as well as evaluation outcomes and measures;
Rowland et al. (U.S. Patent Publication No. 2023/0101506) discloses provisioning of an emergency health service, including in scenarios of suicide-prevention efforts;
Diwan et al. (U.S. Patent Publication No. 2023/0097608) discloses a system for establishing an instant communication session with health providers, given certain risk scores that are generated using predictive models associated with conditions, and identifies specialists based on keywords and the risk scores, including aspects for suicide prevention;
Devitt et al. (U.S. Patent Publication No. 2022/0223292) discloses a system for utilizing digital forensics, artificial intelligence, and machine learning models to prevent suicidal behavior, such that the system generates a model associated with predictive behavior and patterns correlated with suicide and identifies individuals having characteristics and/or behaviors correlated with the model, and initiates services to reduce the risk for suicide for such individuals.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNT