DETAILED ACTION
Status of the Claims
The following is a Final Office Action in response to amendments and remarks filed 06 January 2026.
Claims 1-2, 4, 7-8, 11-12, 14, and 17-20 have been amended.
Claims 1-20 are pending and have been examined.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicants argue that the 35 U.S.C. 101 rejection under Alice Corp. v. CLS Bank Int’l should be withdrawn; however, the Examiner respectfully disagrees. The Examiner notes that, in order to be patent eligible under 35 U.S.C. 101, the claims must be directed to a patent-eligible concept, and the instant claims are not so directed. Contrary to Applicants’ assertion that the claims are not a certain method of organizing human activity or a mental process, the Examiner notes that identifying and providing customized reaction plans for an incident to users within the environment in which the incident occurs is a function that dispatchers, emergency responders, law enforcement, disaster relief, search parties, etc. have traditionally performed for users during incidents where planning is needed. Next, the claims are not directed to a practical application of the concept. The claims do not result in improvements to the functioning of a computer or to any other technology or technical field. They do not effect a particular treatment for a disease. They are not applied with or by a particular machine. They do not effect a transformation or reduction of a particular article to a different state or thing. And they are not applied in some other meaningful way beyond generally linking the use of the judicial exception (i.e., identifying and providing customized reaction plans for an incident to users within the environment in which the incident occurs) to a particular technological environment (i.e., with the use of computers or generic computing components). Here, again as noted in the previous rejection, mere instructions to apply an exception using a generic computer component cannot provide an inventive concept - MPEP 2106.05(f). Contrary to Applicants’ assertions, the newly amended aspects of a graphical user interface do not necessarily root the claims in computer technology.
As noted in the updated rejection below, the “user device(s),” “first graphical user interface (GUI) for a first user device,” and “second GUI for a second user device” recited in the step(s) are insignificant extrasolution activities that do not integrate the claims into a practical application and are also determined to be well-understood, routine, and conventional activity in the field. The Symantec, TLI, and OIP Techs court decisions discussed in MPEP 2106.05(d)(II) indicate that the mere receipt or transmission of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as is the case here). Therefore, when considering the additional elements alone and in combination, there is no inventive concept in the claim. As such, the claims are not patent eligible, even when considered as a whole. Accordingly, this argument is not persuasive, and the rejection is not overcome.
The Examiner also notes that the examples provided by the USPTO are purely hypothetical and are not the benchmark of patent eligibility. This argument appears to be directed to whether the use of a computer or computing components for increased speed and efficiency integrates the claims into a practical application; however, the Examiner respectfully disagrees. Nor, in addressing the second step of Alice, does claiming the improved speed or efficiency inherent in applying the abstract idea on a computer provide a sufficient inventive concept. See Bancorp Servs., LLC v. Sun Life Assurance Co. of Can., 687 F.3d 1266, 1278 (Fed. Cir. 2012) (“[T]he fact that the required calculations could be performed more efficiently via a computer does not materially alter the patent eligibility of the claimed subject matter.”); CLS Bank, Int’l v. Alice Corp., 717 F.3d 1269, 1286 (Fed. Cir. 2013) (en banc), aff’d, 134 S. Ct. 2347 (2014) (“[S]imply appending generic computer functionality to lend speed or efficiency to the performance of an otherwise abstract concept does not meaningfully limit claim scope for purposes of patent eligibility.” (citations omitted)).
Applicants argue that the Johnson reference does not disclose the amended claims; however, the Examiner respectfully disagrees. Contrary to Applicants’ assertions, and as previously cited, Johnson does disclose the amended aspects (from claim 2). More specifically, “An example notification may be that an emergency responder may indicate that a rescue operation has been successfully executed or that an emergency responder is experiencing an unforeseen delay. After receiving updates, the system continuously processes the new information and adjusts the emergency response plan as appropriate, ¶130; The server receives a notification that an emergency situation has occurred (410A). Examples of disasters can include natural disasters, such as a severe storm, a hurricane, an earthquake; or human-caused disasters, such as a terrorist attack, a nuclear bomb strike, or a chemical spill. The emergency situation can be detected automatically via sensors or be determined by a notification from publically available information or from an individual. The server receives more information in light of the emergency situation (415A). For instance, the information can include real time information about occupancy of a shelter or confirmation from emergency responders that they are prepared to respond to the emergency incidents. The server can receive current information regarding the emergency situation from news outlets, social media, and other public information outlets. The information may include occupant and shelter status, road closures, environmental and weather conditions, or other information, such as information that may facilitate an efficient emergency response, (Johnson ¶126-¶127);” “The bottom of the display of the mobile device 108 is taken up by the map 326 which may be generated from a combination of data sources, e.g., using techniques such as those for creating mash-ups with GOOGLE MAPS™. 
For example, the central server may provide a latitude and longitude for the emergency responder 102a and a latitude and longitude for the shelter 106a, to a navigation system that is publicly available (via a published application programming interface (API)), and a navigation system may respond by providing data for drawing the map overlaid with a thick navigation route line for an optimal path between the two points for the emergency responder 102a. This path may be superimposed on the map 326 that responder 102a is shown. In addition, actual icons 135 and 147 are superimposed on the map to show the emergency responder 102a where relevant resources are located near their route between their current location and the shelter 106a. In this case, a hospital is represented by the icon 135, and a grocery store is represented by icon 147, (Johnson ¶118),” which clearly discloses the customized, individual reaction plan: not only do updates from a responder allow the plan to be updated, but the plan also directs how an emergency responder/rescuer is to navigate to or from an emergency (i.e., responders experiencing delays are handled individually, and not all responders would receive the same navigation instructions, as they are potentially in different locations). Furthermore, and as noted in the updated rejection below, Johnson is also able to provide information based upon the responder/rescuer’s expertise and equipment knowledge at a particular shelter, which is clearly an individual, customized plan (see Johnson ¶37), as well as the ability to plot an itinerary for each responder (see Johnson ¶59). Still further (as previously cited in rejecting claims 4 and 14), Johnson discloses: “For example, the dispatcher may have learned that the shelter 106a has a diabetic occupant or has other issues that the emergency responder 102a should be aware of. 
That information may be provided in the custom text area 324 so that the emergency responders can immediately see it if they are looking at their devices while they run or otherwise make their way toward the shelter 106a. Earlier text may be scrolled upward as additional text is added to the custom text area 324, and a “Communicate with Dispatchers” button 330 may also be shown so as to allow multiple-way communication among the dispatcher and the emergency responders. The emergency responders can request information from the central server via the button 330. The chat text may be populated directly by typing of various users, or by users speaking and their spoken words being converted into text. All such spoken communications between and among the people involved in the event may also be converted in a similar manner to be stored with a summary report for the event, (Johnson ¶120),” which clearly discloses the ability to send emergency responder 102a unique, individualized, custom information as part of a plan or instruction. The Examiner has broadly interpreted, as one of ordinary skill in the art would, the ability to provide navigation, an itinerary, and information based upon expertise and specific occupants/patients, as well as the ability to continually update plans as potential delays arise, as the ability to provide individual customized reaction plans based upon the formfactors of the devices. As such, the arguments are not persuasive, and the rejection has not been overcome.
Applicant next argues that Goldstein does not teach the ability to provide a sonic alert or instruction to a first device and no sonic alert or instruction to a second device; however, the Examiner disagrees for a plurality of reasons. First, as currently recited, the claim only requires that a sonic alert or instruction be sent to a first device, as the second device could receive no alert or instruction whatsoever (not merely a non-sonic alert or instruction, as Applicant appears to argue). Second, contrary to Applicants’ assertions, Goldstein teaches both of these interpretations of the claims. As previously cited, Goldstein teaches how a sonic, ultrasonic, or other audible alert is able to be sent (Goldstein ¶236, ¶238, ¶296, ¶319, ¶661), but also discusses, as previously cited, silent alarms being sent to some of the users (Goldstein ¶296, a citation conveniently omitted from Applicants’ remarks). As such, the arguments are not persuasive, and the rejection has not been overcome.
Applicant’s remaining arguments have been addressed in the updated rejection below, as necessitated by the amendments.
In response to arguments regarding any dependent claims that have not been individually addressed, all rejections of these dependent claims are maintained due to Applicants' failure to distinctly and specifically point out the supposed errors in the Examiner's prior Office action (37 CFR 1.111). The Examiner notes that Applicants argue only that the dependent claims should be allowable because the independent claims are unobvious and patentable over the prior art.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims are directed to a process (method) (an act, or series of acts or steps) and a machine (system comprising a processor) (a concrete thing, consisting of parts, or of certain devices and combinations of devices). Thus, each of the claims falls within one of the four statutory categories (Step 1). However, the claims recite identifying and providing customized reaction plans for an incident to users within the environment in which the incident occurs, which is an abstract idea of organizing human activity as well as a mental process.
The limitations of “identifying....a plurality of user devices to receive a reaction plan for an ongoing incident in an environment; customizing the reaction plan to produce a plurality of individualized versions, wherein each individualized version corresponds to an individual user device of the plurality of user devices and is customized based on a device type of each individual user device and a role of a user associated with each individual user device in responding to the ongoing incident; transmitting the individualized version to each corresponding individual user..., causing...output the respective individualized version of the reaction plan,” as drafted, recite a process that, under its broadest reasonable interpretation, covers organizing human activities--fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)--and/or a mental process (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion) but for the recitation of generic computer components (Step 2A, Prong One). That is, other than reciting “at a central service” (or “A system, comprising: a processor; and a memory including instructions, that when executed by the processor perform operations comprising:” in claim 11), nothing in the claim precludes the steps from falling within the methods of organizing human interactions grouping and/or from practically being performed in the mind. 
For example, but for the “at a central service” (or “A system, comprising: a processor; and a memory including instructions, that when executed by the processor perform operations comprising:” in claim 11) language, “identifying,” “customizing,” “transmitting,” and “causing” in the context of this claim encompass a user manually determining individuals to receive a reaction plan, customizing the plan, and providing the plan to the users, which is managing personal behavior within an incident environment or a mental process/judgment with respect to users, incidents, and locations. Where possible, the limitations are considered together as a single abstract idea rather than as a plurality of separate abstract ideas to be analyzed individually. “For example, in a claim that includes a series of steps that recite mental steps as well as a mathematical calculation, an examiner should identify the claim as reciting both a mental process and a mathematical concept for Step 2A, Prong One to make the analysis clear on the record.” MPEP 2106.04, subsection II.B. Under such circumstances, however, the Supreme Court has treated such claims in the same manner as claims reciting a single judicial exception. Id. (discussing Bilski v. Kappos, 561 U.S. 593 (2010)). Here, the limitations are considered together as a single abstract idea for further analysis. Because the claim limitations, under their broadest reasonable interpretation, cover methods of organizing human activity and/or concepts that may practically be performed in the mind but for the recitation of generic computer components, they fall within the groupings of abstract ideas (Step 2A, Prong One: YES). Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application (Step 2A, Prong Two). The “user device(s),” “first graphical user interface (GUI) for a first user device,” and “second GUI for a second user device” are simply involved in insignificant post-solution output. Next, the claim only recites one additional element – using a central service or processor to perform the steps. The central service or processor in the steps is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Specifically, the claims amount to nothing more than an instruction to apply the abstract idea using a generic computer, invoking computers as tools by adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.04(d)(I), discussing MPEP 2106.05(f). Accordingly, the combination of these additional elements does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea, even when considered as a whole (Step 2A, Prong Two: NO).
The claim does not include a combination of additional elements sufficient to amount to significantly more than the judicial exception (Step 2B). As discussed above with respect to integration of the abstract idea into a practical application (Step 2A, Prong Two), the combination of additional elements of using a central service or processor to perform the steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Reevaluating here at Step 2B, the “user device(s),” “first graphical user interface (GUI) for a first user device,” and “second GUI for a second user device” recited in the step(s), which are insignificant extrasolution activities, are also determined to be well-understood, routine, and conventional activity in the field. The Symantec, TLI, and OIP Techs court decisions discussed in MPEP 2106.05(d)(II) indicate that the mere receipt or transmission of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as is the case here). Therefore, when considering the additional elements alone and in combination, there is no inventive concept in the claim. As such, the claims are not patent eligible, even when considered as a whole (Step 2B: NO).
Claims 2, 4, 12, and 14 recite additional limitations further limiting the user devices (GUI, wireless), which is not an inventive concept that meaningfully limits the abstract idea. Again, as discussed with respect to claims 1 and 11, these limitations are no more than mere instructions to apply the exception using a computer or with computing components. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Even when considered as a whole, the claims do not integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Claims 3, 5-6, 8-10, 13, 15-16, and 18-20 recite additional limitations further including sensors which collect data for the abstract idea, which are still directed towards the abstract idea previously identified and are not an inventive concept that meaningfully limits the abstract idea. Again, as discussed with respect to claims 1 and 11, these limitations are no more than mere instructions to apply the exception using a computer or with computing components. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Even when considered as a whole, the claims do not integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Claims 7 and 17 recite additional limitations further limiting which users will receive the reaction plans, which is still directed towards the abstract idea previously identified and is not an inventive concept that meaningfully limits the abstract idea. Again, as discussed with respect to claims 1 and 11, these limitations are no more than mere instructions to apply the exception using a computer or with computing components. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Even when considered as a whole, the claims do not integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Claims 1-20 are therefore not eligible subject matter, even when considered as a whole.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3-4, 8-11, 13-14, and 18-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Johnson (US PG Pub. 2016/0284038).
As per claims 1 and 11, Johnson discloses a method and a system comprising: a processor; and a memory including instructions that, when executed by the processor, perform operations comprising (system, processor, memory, Johnson ¶4; user interface, mobile computing device, ¶7):
identifying, at a central service, a plurality of user devices to receive a reaction plan for an ongoing incident in an environment (The emergency responders 102a, 102b (sometimes referred to collectively as emergency responders 102) can respond to the emergency situation according to information, guidance, or both, provided from a centralized computing device (e.g., a server 100). The information can be information about one or more of the shelters or the potential or actual occupants of the shelter. The information can be useful to the emergency responders 102, e.g., in determining an appropriate response to the emergency situation. The emergency responders 102a, 102b can receive information from the central server 100 on respective computing devices 108a, 108b (sometimes referred to collectively as computing devices 108). The computing devices 108 can be mobile computing devices (e.g., a mobile telephone, a tablet, a wearable computing device such as a wrist-worn computing device or computer-enabled glasses, or another type of mobile computing device), medical devices (e.g., a defibrillator, such as a professional defibrillating system or an automated external defibrillator (AED)), computing devices housed in an ambulance or other vehicle (e.g., a dispatch computing device), or another type of computing device. The computing devices 108 can display the information provided from the central server 100, such as emergency response information about the emergency situation. In some cases, the computing devices 108 can display information indicative of guidance provided from the central server, such as an emergency response plan developed by the central server 100. The emergency responders 102 can interact with the computing devices 108 via user interfaces, such as a touchscreen display, a keyboard, or another type of user interface, through interactions such as voice commands, gestures, eye movements, or another type of interaction, Johnson ¶61; central server, ¶62);
customizing the reaction plan to produce a plurality of individualized versions, wherein each individualized version corresponds to an individual user device of the plurality of user devices and is customized based on a device type and formfactor of each individual user device and a role of a user associated with each individual user device in responding to the ongoing incident (The processor and memory are configured to provide, to one of the computing devices, data indicative of (i) a location of a particular shelter and (ii) potential occupancy of the particular shelter responsive to a request received from the computing device. The processor and memory are configured to identify a particular shelter to which a particular rescuer can provide rescue services. The processor and memory are configured to identify the particular shelter based on a location of the computing device associated with the rescuer and the data indicative of the location of the particular shelter. The processor and memory are configured to identify the particular shelter based on one or more of an expertise of the rescuer and a piece of medical equipment available to the rescuer, Johnson ¶36-¶37; produce/prepares plan for responders, ¶105; basic user module, emergency responder module of app installed on user device/civilian devices, ¶108) (Examiner notes the produced/prepared plan for individual responders based upon skill sets, locations, priorities, and the basic user plans as the different types of individualized versions of reaction plans), wherein customizing the reaction plan includes:
generating a first graphical user interface (GUI) for a first user device of the plurality of user devices based on a type and formfactor of the first user device (graphical user interface, Johnson ¶149; An example notification may be that an emergency responder may indicate that a rescue operation has been successfully executed or that an emergency responder is experiencing an unforeseen delay. After receiving updates, the system continuously processes the new information and adjusts the emergency response plan as appropriate, ¶130; The server receives a notification that an emergency situation has occurred (410A). Examples of disasters can include natural disasters, such as a severe storm, a hurricane, an earthquake; or human-caused disasters, such as a terrorist attack, a nuclear bomb strike, or a chemical spill. The emergency situation can be detected automatically via sensors or be determined by a notification from publically available information or from an individual. The server receives more information in light of the emergency situation (415A). For instance, the information can include real time information about occupancy of a shelter or confirmation from emergency responders that they are prepared to respond to the emergency incidents. The server can receive current information regarding the emergency situation from news outlets, social media, and other public information outlets. The information may include occupant and shelter status, road closures, environmental and weather conditions, or other information, such as information that may facilitate an efficient emergency response, ¶126-¶127; The bottom of the display of the mobile device 108 is taken up by the map 326 which may be generated from a combination of data sources, e.g., using techniques such as those for creating mash-ups with GOOGLE MAPS™. 
For example, the central server may provide a latitude and longitude for the emergency responder 102a and a latitude and longitude for the shelter 106a, to a navigation system that is publicly available (via a published application programming interface (API)), and a navigation system may respond by providing data for drawing the map overlaid with a thick navigation route line for an optimal path between the two points for the emergency responder 102a. This path may be superimposed on the map 326 that responder 102a is shown. In addition, actual icons 135 and 147 are superimposed on the map to show the emergency responder 102a where relevant resources are located near their route between their current location and the shelter 106a. In this case, a hospital is represented by the icon 135, and a grocery store is represented by icon 147, ¶118; The processor and memory are configured to identify a particular shelter to which a particular rescuer can provide rescue services. The processor and memory are configured to identify the particular shelter based on a location of the computing device associated with the rescuer and the data indicative of the location of the particular shelter. The processor and memory are configured to identify the particular shelter based on one or more of an expertise of the rescuer and a piece of medical equipment available to the rescuer, ¶37; types of devices, ¶61; prioritize, ¶105-¶106) (Examiner notes the ability to provide navigation, itinerary, information based upon expertise and specific occupants/patients as the ability to provide individual customized reaction plans based upon formfactors of the devices);
generating a second GUI for a second user device of the plurality of user devices based on a type and formfactor of the second user device, wherein the second GUI includes a different amount of data than the first GUI according to differing formfactors for the first user device and the second user device (graphical user interface, Johnson ¶149; An example notification may be that an emergency responder may indicate that a rescue operation has been successfully executed or that an emergency responder is experiencing an unforeseen delay. After receiving updates, the system continuously processes the new information and adjusts the emergency response plan as appropriate, ¶130; The server receives a notification that an emergency situation has occurred (410A). Examples of disasters can include natural disasters, such as a severe storm, a hurricane, an earthquake; or human-caused disasters, such as a terrorist attack, a nuclear bomb strike, or a chemical spill. The emergency situation can be detected automatically via sensors or be determined by a notification from publically available information or from an individual. The server receives more information in light of the emergency situation (415A). For instance, the information can include real time information about occupancy of a shelter or confirmation from emergency responders that they are prepared to respond to the emergency incidents. The server can receive current information regarding the emergency situation from news outlets, social media, and other public information outlets. The information may include occupant and shelter status, road closures, environmental and weather conditions, or other information, such as information that may facilitate an efficient emergency response, ¶126-¶127; The bottom of the display of the mobile device 108 is taken up by the map 326 which may be generated from a combination of data sources, e.g., using techniques such as those for creating mash-ups with GOOGLE MAPS™. 
For example, the central server may provide a latitude and longitude for the emergency responder 102a and a latitude and longitude for the shelter 106a, to a navigation system that is publicly available (via a published application programming interface (API)), and a navigation system may respond by providing data for drawing the map overlaid with a thick navigation route line for an optimal path between the two points for the emergency responder 102a. This path may be superimposed on the map 326 that responder 102a is shown. In addition, actual icons 135 and 147 are superimposed on the map to show the emergency responder 102a where relevant resources are located near their route between their current location and the shelter 106a. In this case, a hospital is represented by the icon 135, and a grocery store is represented by icon 147, ¶118; The processor and memory are configured to identify a particular shelter to which a particular rescuer can provide rescue services. The processor and memory are configured to identify the particular shelter based on a location of the computing device associated with the rescuer and the data indicative of the location of the particular shelter. The processor and memory are configured to identify the particular shelter based on one or more of an expertise of the rescuer and a piece of medical equipment available to the rescuer, ¶37; types of devices, ¶61; prioritize, ¶105-¶106) (Examiner notes the ability to provide navigation, itinerary, information based upon expertise and specific occupants/patients as the ability to provide individual customized reaction plans based upon formfactors of the devices).
transmitting a respective individualized version of the reaction plan to each corresponding individual user device of the plurality of user devices (The system processes the collected information and prepares an emergency response plan (420A). The emergency response plan may include notifications to users as well as plans for rescue operations. For instance, the system evaluates the information pertaining to each shelter and makes a determination of a priority for emergency response for each shelter based on that information. The evaluation and determination can account for attributes of the people located at each shelter. The system then assigns a highest priority shelter to each emergency responder. The system can consider one or more factors in assigning highest priority shelters, including the shelter status, the status of occupants of the shelter, emergency responder location and expertise, or environmental factors. The possible responders to emergency incidents may be determined based solely on distance from the locations of the emergency incidents, such as by circumscribing a circle of a particular radius around the event (where the circle can be made larger and larger until a predetermined number of potential responders can be found). The emergency responders may be selected based on their estimated time to respond, so that a system could look at the current speed of an emergency responder to infer that the emergency responder is in an automobile, and thus could arrive at the emergency incident more quickly than an emergency responder who is not currently moving. The server delivers the emergency response plan to emergency responders (425A). The system can selectively distribute information. The information may include some basic information about the emergency situation, updated information from shelters or shelter occupants, information about other emergency responders, or information needed to generate an annotated map (e.g., the map shown in FIG. 1B). 
Information delivered to responders may include routing details to their highest priority shelters. The system can send map data for transmission to the emergency responders, Johnson ¶128-¶129; see also In some examples, the user module can include a messages section that allows the central server 100 to communicate to users in a non-disaster situation. For example, the central server can communicate warnings of potential disasters or notify users of potential ways to prepare for disaster situations. In some implementations, the mobile devices of the users will have “push notifications” activated, and the central server can notify the users through the push notifications system of recommendations to be prepared for disasters. The central server can send out periodic newsletters with checklists and tips for disaster preparation. In some examples, prior to an emergency situation, the user module can enable periodic transfers of information from the computing/mobile devices 107, 108 to the central server 100 in order to preempt potential communication rifts between the mobile device and the central server in the event of a disaster. The mobile device can periodically send its location, so that if a disaster occurs and destroys communication means, the central server will have historical location information and be able to predict where the user might be, ¶114-¶115; bottom display is a map, with path, ¶117-¶118); and
causing the first user device and the second user device to output the respective individualized version of the reaction plan (The system can selectively distribute information. The information may include some basic information about the emergency situation, updated information from shelters or shelter occupants, information about other emergency responders, or information needed to generate an annotated map (e.g., the map shown in FIG. 1B). Information delivered to responders may include routing details to their highest priority shelters. The system can send map data for transmission to the emergency responders, Johnson ¶128-¶129; itinerary for particular responder, ¶59; The central server 100 may receive additional information that may be sent out to all users equipped with the responder module regarding updated information on a particular shelter. For example, the dispatcher may have learned that the shelter 106a has a diabetic occupant or has other issues that the emergency responder 102a should be aware of. That information may be provided in the custom text area 324 so that the emergency responders can immediately see it if they are looking at their devices while they run or otherwise make their way toward the shelter 106a. Earlier text may be scrolled upward as additional text is added to the custom text area 324, and a “Communicate with Dispatchers” button 330 may also be shown so as to allow multiple-way communication among the dispatcher and the emergency responders. The emergency responders can request information from the central server via the button 330. The chat text may be populated directly by typing of various users, or by users speaking and their spoken words being converted into text. All such spoken communications between and among the people involved in the event may also be converted in a similar manner to be stored with a summary report for the event, Johnson ¶120).
As per claims 3 and 13, Johnson discloses as shown above with respect to claims 1 and 11. Johnson further discloses wherein the reaction plan is generated by the central service: receiving, from a wireless communication device associated with a sensor deployed in the environment with a plurality of sensors, a status report; verifying a location of the sensor in the environment; updating a map of the environment with the status report; processing pending status reports, including the status report, to identify an incident flow; and generating the reaction plan based on the map and the incident flow (user communicate updates, Johnson ¶111; An example notification may be that an emergency responder may indicate that a rescue operation has been successfully executed or that an emergency responder is experiencing an unforeseen delay. After receiving updates, the system continuously processes the new information and adjusts the emergency response plan as appropriate, ¶130; The server receives a notification that an emergency situation has occurred (410A). Examples of disasters can include natural disasters, such as a severe storm, a hurricane, an earthquake; or human-caused disasters, such as a terrorist attack, a nuclear bomb strike, or a chemical spill. The emergency situation can be detected automatically via sensors or be determined by a notification from publically available information or from an individual. The server receives more information in light of the emergency situation (415A). For instance, the information can include real time information about occupancy of a shelter or confirmation from emergency responders that they are prepared to respond to the emergency incidents. The server can receive current information regarding the emergency situation from news outlets, social media, and other public information outlets. 
The information may include occupant and shelter status, road closures, environmental and weather conditions, or other information, such as information that may facilitate an efficient emergency response, ¶126-¶127; The bottom of the display of the mobile device 108 is taken up by the map 326 which may be generated from a combination of data sources, e.g., using techniques such as those for creating mash-ups with GOOGLE MAPS™. For example, the central server may provide a latitude and longitude for the emergency responder 102a and a latitude and longitude for the shelter 106a, to a navigation system that is publicly available (via a published application programming interface (API)), and a navigation system may respond by providing data for drawing the map overlaid with a thick navigation route line for an optimal path between the two points for the emergency responder 102a. This path may be superimposed on the map 326 that responder 102a is shown. In addition, actual icons 135 and 147 are superimposed on the map to show the emergency responder 102a where relevant resources are located near their route between their current location and the shelter 106a. In this case, a hospital is represented by the icon 135, and a grocery store is represented by icon 147, ¶118).
As per claims 4 and 14, Johnson discloses as shown above with respect to claims 1 and 11. Johnson further discloses wherein: the first user device of the plurality of user devices is associated with the environment by the central service and is not located in the environment (dispatch computing devices, Johnson ¶61); the second user device of the plurality of user devices is located in the environment and is not associated with the environment by the central service (nearby emergency responders, ¶111); and a third user device of the plurality of user devices is located in the environment and is associated with the environment by the central service (The central server 100 may receive additional information that may be sent out to all users equipped with the responder module regarding updated information on a particular shelter. For example, the dispatcher may have learned that the shelter 106a has a diabetic occupant or has other issues that the emergency responder 102a should be aware of. That information may be provided in the custom text area 324 so that the emergency responders can immediately see it if they are looking at their devices while they run or otherwise make their way toward the shelter 106a. Earlier text may be scrolled upward as additional text is added to the custom text area 324, and a “Communicate with Dispatchers” button 330 may also be shown so as to allow multiple-way communication among the dispatcher and the emergency responders. The emergency responders can request information from the central server via the button 330. The chat text may be populated directly by typing of various users, or by users speaking and their spoken words being converted into text. All such spoken communications between and among the people involved in the event may also be converted in a similar manner to be stored with a summary report for the event, Johnson ¶120).
As per claims 8 and 18, Johnson discloses as shown above with respect to claims 1 and 11. Johnson further discloses in response to receiving inputs from a user of the plurality of users or a sensor located in the environment: updating the reaction plan; and customizing and transmitting the reaction plan as updated to each user device of the plurality of user devices (user communicate updates, Johnson ¶111; An example notification may be that an emergency responder may indicate that a rescue operation has been successfully executed or that an emergency responder is experiencing an unforeseen delay. After receiving updates, the system continuously processes the new information and adjusts the emergency response plan as appropriate, ¶130; The server receives a notification that an emergency situation has occurred (410A). Examples of disasters can include natural disasters, such as a severe storm, a hurricane, an earthquake; or human-caused disasters, such as a terrorist attack, a nuclear bomb strike, or a chemical spill. The emergency situation can be detected automatically via sensors or be determined by a notification from publically available information or from an individual. The server receives more information in light of the emergency situation (415A). For instance, the information can include real time information about occupancy of a shelter or confirmation from emergency responders that they are prepared to respond to the emergency incidents. The server can receive current information regarding the emergency situation from news outlets, social media, and other public information outlets. 
The information may include occupant and shelter status, road closures, environmental and weather conditions, or other information, such as information that may facilitate an efficient emergency response, ¶126-¶127; The bottom of the display of the mobile device 108 is taken up by the map 326 which may be generated from a combination of data sources, e.g., using techniques such as those for creating mash-ups with GOOGLE MAPS™. For example, the central server may provide a latitude and longitude for the emergency responder 102a and a latitude and longitude for the shelter 106a, to a navigation system that is publicly available (via a published application programming interface (API)), and a navigation system may respond by providing data for drawing the map overlaid with a thick navigation route line for an optimal path between the two points for the emergency responder 102a. This path may be superimposed on the map 326 that responder 102a is shown. In addition, actual icons 135 and 147 are superimposed on the map to show the emergency responder 102a where relevant resources are located near their route between their current location and the shelter 106a. In this case, a hospital is represented by the icon 135, and a grocery store is represented by icon 147, ¶118).
As per claims 9 and 19, Johnson discloses as shown above with respect to claims 1 and 11. Johnson further discloses wherein the ongoing incident is identified by a first set of sensors deployed in the environment that are operated by a first party, the method further comprising: collecting data from a second set of sensors deployed outside of the environment that are operated by a second party; and generating the reaction plan according to the data from the first set of sensors deployed in the environment and the data collected from the second set of sensors (accelerometers, heart rate sensors, and altimeters, Johnson ¶100; The server receives a notification that an emergency situation has occurred (410A). Examples of disasters can include natural disasters, such as a severe storm, a hurricane, an earthquake; or human-caused disasters, such as a terrorist attack, a nuclear bomb strike, or a chemical spill. The emergency situation can be detected automatically via sensors or be determined by a notification from publically available information or from an individual, ¶126; see also ¶138).
As per claims 10 and 20, Johnson discloses as shown above with respect to claims 1 and 11. Johnson further discloses wherein the ongoing incident is identified by a first set of sensors deployed in the environment that are operated by a first party, the method further comprising: commanding at least a subset of the first set of sensors to change a data reporting rate based on a sensor type for the subset of the sensors and an incident type of the ongoing incident (While the emergency responder devices 108 are shown and described as a mobile devices in this example, it may take a variety of other forms. For example, the device could be a cellular telephone having text messaging capabilities, so that the user can receive direction via text message. The device could also be a portable networked device that does not have direct telephony capabilities such as an IPOD TOUCH media player or similar device. Other devices such as tablet PC's and other portable communication devices may also be used. In addition, certain of the functionality described above and below can be provided as part of an accessory to the relevant electronic communication, such as by placing a force sensor, temperature sensor, accelerometer, pulse sensor, blood pressure sensor, or other sensor in a plug-in module and/or jacket that a user can purchase for their electronic communication device (though such sensors may also be integrated into the device where appropriate). For example, a user of device may purchase an electronic stethoscope that may plug into device and act as a traditional stethoscope. 
A user may also purchase a blood-pressure cuff or other such sensors that may provide electronic signals to device so that readings of victim parameters, and particularly vital signs, may be uploaded easily to the rest of the system through the network, Johnson ¶138; automated detecting, ¶126) (Examiner interprets the ability to utilize different sensors as the ability to command changes in reporting rate, i.e., when and how often to take a user’s vitals).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 2, 5-6, 12, and 15-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Johnson (US PG Pub. 2016/0284038) and further in view of Goldstein et al. (US PG Pub. 2023/0417919).
As per claims 2 and 12, Johnson discloses as shown above with respect to claims 1 and 11. Johnson does not expressly disclose generating and populating a first sonic alert or instruction for output by the first user device of the plurality of user devices and no sonic alert for the second user device of the plurality of user devices.
However, Goldstein teaches generating and populating a first sonic alert or instruction for output by the first user device of the plurality of user devices and no sonic alert for the second user device of the plurality of user devices (Alternatively or additionally, processor 136 and/or remote device 140 may be configured to transmit alerts to users, such as silent alarms or other security data that a user may be able to utilize in taking action in response to a perceived or sensed security threat. Remote communication may further be used to contact law enforcement and/or security services which may coordinate their efforts with actions taken by apparatus 100. Security services may be provided with safety equipment and our overrides, so that they made either deactivate responses arrival area, and or maybe a responses by, for instance, eyewear having dichroic lenses or other optical elements to protect security personnel from directed light deterrent 156 actions or the like, Goldstein ¶296; ultrasonic, ¶236; With further reference to FIG. 1, multiple LRAD speakers, and/or sequential aiming of a single LRAD speaker may be used to send different audio signals to different subjects 308. This may be used to generate highly specific instructions such as directions to leave subject area which vary based on a position of a receiving person, and/or to generate distinct messages and/or sounds to distinct persons to cause further confusion and/or disorientation. 
For instance, a first subject 308 alerted by this means may hear a police siren, while a second subject 308 may hear a barking dog, a third a voice saying “hey,” or the like, ¶238; May generate alerts for officers operating the apparatus that a person represents a threat, may put a spot on the person a target, or a quote smart flashlight quote beam that tracks the person on them, or the like permitting an officer operating apparatus, as well as apparatus itself, to engage in more aggressive interdiction, arrest, or other countermeasures against the potential assailant, or criminal, ¶319; see also types of alerts, ¶661).
Both the Goldstein and Johnson references are analogous in that both are directed towards/concerned with responding to incidents, emergencies, and/or threats. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Goldstein’s ability to calibrate responses based upon determined subjects entering into specific incident areas in Johnson’s system to improve the system and method with a reasonable expectation that this would result in an emergency management system that is able to provide more warnings and/or instructions based upon the incident or subject area.
The motivation being that there is a need for improved automated threat detection or to mitigate a situation (Goldstein ¶3-¶4 and ¶310).
As per claims 5 and 15, Johnson discloses as shown above with respect to claims 1 and 11. Johnson does not expressly disclose in response to detecting an unassociated user in the environment via one or more of a plurality of sensors deployed in the environment that is not associated with any user device of the plurality of user devices, transmitting the reaction plan to one or more sensors of the plurality of sensors within a predefined distance of the unassociated user for output into the environment.
However, Goldstein teaches in response to detecting an unassociated user in the environment via one or more of a plurality of sensors deployed in the environment that is not associated with any user device of the plurality of user devices, transmitting the reaction plan to one or more sensors of the plurality of sensors within a predefined distance of the unassociated user for output into the environment (detect entry of persons or animals into subject areas and respond consistently with determined behavior descriptors, object recognition, or rulesets using a graduated deterrence system. Embodiments may use a combination of imaging and other sensors, such as optical cameras, infrared cameras, 3D cameras, multispectral cameras, hyperspectral cameras, polarized cameras, chemical sensors, motion sensors, ranging sensors, light radar component, such as lidar, detection or imaging using radio frequencies component, such as radar, terahertz or millimeter wave imagers, seismic sensors, magnetic sensors, weight/mass sensors, ionizing radiation sensors, and/or acoustical sensors, to accurately recognize and spatially determine entrance into the subject area, to distinguish between known or whitelisted persons, children, animals, and potential threats. Embodiments may further distinguish casual or accidental intruders from those with more purposeful or malicious intent and may calibrate responses according both to detected behavior, imminence of threat, or other rulesets. 
Deterrent responses may be calibrated to detected behavior descriptors and rulesets so as to generate a graduated response that can escalate from warnings to irritating or off-putting responses or further to incapacitating responses as needed to achieve security objectives with a minimal harm to the intended target, Goldstein ¶55; seismic sensor in the area, ¶83; Apparatus may alternatively or additionally perform communication with subject via subject device, both when subject is in subject area and thereafter. For instance, apparatus may call or message subject via subject device to provide warnings or instructions thereto, ¶142; With further reference to FIG. 1, multiple LRAD speakers, and/or sequential aiming of a single LRAD speaker may be used to send different audio signals to different subjects 308. This may be used to generate highly specific instructions such as directions to leave subject area which vary based on a position of a receiving person, and/or to generate distinct messages and/or sounds to distinct persons to cause further confusion and/or disorientation. For instance, a first subject 308 alerted by this means may hear a police siren, while a second subject 308 may hear a barking dog, a third a voice saying “hey,” or the like, ¶238).
Both the Goldstein and Johnson references are analogous in that both are directed towards/concerned with responding to incidents, emergencies, and/or threats. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Goldstein’s ability to calibrate responses based upon determined subjects entering into specific incident areas in Johnson’s system to improve the system and method with a reasonable expectation that this would result in an emergency management system that is able to provide more warnings and/or instructions based upon the incident or subject area.
The motivation being that there is a need for improved automated threat detection or to mitigate a situation (Goldstein ¶3-¶4 and ¶310).
As per claims 6 and 16, Johnson and Goldstein disclose as shown above with respect to claims 5 and 15. Goldstein further teaches wherein the unassociated user is non-human (animals, Goldstein ¶55; subject is animal that has been introduced into subject area, ¶91; non-human animals, ¶150).
Claim(s) 7 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Johnson (US PG Pub. 2016/0284038).
As per claims 7 and 17, Johnson discloses as shown above with respect to claims 1 and 11. While Johnson further discloses the ability to only provide plans/updates/maps to users within a location and/or time frame of an incident (The systems and techniques described here can also be employed in a situation in which there are mass casualties, such as a large accident (e.g., train, bus, or airplane crash), a terrorist attack (e.g., bombing or gas dispersion), natural disaster (e.g., earthquake or large tornado), or public health crisis. For example, the system may initially determine manually, automatically, or semi-automatically, that an emergency situation in the form of a mass event is occurring, such as by determining that a number of calls for different events (e.g., different people in need of held) over a time period (e.g., 1 hour) and in a particular area has exceeded a predetermine threshold. Such a determination may be made by using known automatic clustering analysis techniques so as to distinguish calls that are connected as shown by time and geographic location, from those that are independent but happen to be spatially and time related. Upon the condition of a mass event being recognized, the system may generate an alarm via the communications systems to begin elevating the status of the event. For example, the alarm may first be provided to a dispatcher or dispatchers who may attempt to confirm the nature of the event, such as by asking a victim at the scene a series of questions designed to elicit information about the type of event and scope of the event. 
At a next stage, the emergency responder system described herein is executed, Johnson ¶140), Johnson does not expressly disclose wherein the reaction plan as customized is a refusal reaction plan for a given user who is not located in the environment or associated with the environment by the central service that actively prevents communication between a given user device associated with the given user and the central service while the given user device is within a predefined distance of the environment, as this is a recognized equivalence for the same purpose (MPEP 2144.06).
However, the Examiner notes that one of ordinary skill in the art, before the effective filing date of the invention, would have recognized the ability to provide plans/updates/maps to users within a location and/or time frame of an incident as wherein the reaction plan as customized is a refusal reaction plan for a given user who is not located in the environment or associated with the environment by the central service that actively prevents communication between a given user device associated with the given user and the central service while the given user device is within a predefined distance of the environment.
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to include wherein the reaction plan as customized is a refusal reaction plan for a given user who is not located in the environment or associated with the environment by the central service that actively prevents communication between a given user device associated with the given user and the central service while the given user device is within a predefined distance of the environment in the system of Johnson, as the system would yield predictable results.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the Examiner should be directed to ANDREW B WHITAKER whose telephone number is (571)270-7563. The examiner can normally be reached on M-F, 8am-5pm, EST.
If attempts to reach the examiner by telephone are unsuccessful, the Examiner’s supervisor, Lynda Jasmin can be reached on (571) 272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form
/ANDREW B WHITAKER/Primary Examiner, Art Unit 3629