Detailed Action
This communication is in response to the Application filed on 5/13/2024.
Claims 1-20 are pending and have been examined.
Independent Claims 1, 10, and 19 are Method, Storage, and System claims, respectively.
Apparent priority: 5/12/2023.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 20 is objected to because of the following informalities: Claim 20 recites “The system of claim 10,” but claim 10 recites “A non-transitory computer-readable storage medium,” not a system. The Examiner suggests amending claim 20 to recite “The system of claim 19” or “The non-transitory computer-readable storage medium of claim 10.” Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The independent Claims are directed to statutory categories:
Claim 1 is a method claim and is directed to the process category of patentable subject matter.
Claim 10 is a storage medium claim and is directed to the manufacture category of patentable subject matter.
Claim 19 is a system claim and is directed to the machine category of patentable subject matter.
Independent claim 1 recites,
“1. A method for resolving commands for responding to an ongoing situation using machine learning, the method comprising:
ingesting situational data from multiple data sources,
wherein the situational data comprises image-based situational data and natural language data that relates to an ongoing situation,
and wherein at least a portion of the situational data relates to response activities of a responder team for the ongoing situation; (This relates to a human using the five senses to ingest data from multiple sources.)
generating, by an ensemble machine learning model comprising at least first model and a second model, recommended commands by: (This relates to a human generating commands using speech, or using pen and paper or using hand signals.)
recognizing, via the first model using the ingested situational data, state information about the ongoing situation; (This relates to a human using logic and reasoning in the human mind to recognize state information given a situation.)
generating, via the second model, the recommended commands, (This relates to a human generating commands using speech, or using pen and paper or using hand signals.)
wherein, the second model comprises a generative natural language model configured to compare
a) at least a portion of the situational data and the recognized state information; and
b) plan data for the ongoing situation that comprises template commands, and (This relates to a human comparing data to state information or plan data of template commands using logic and reasoning in the human mind.)
the second model generates the recommended commands based on the comparison; and (This relates to a human generating commands using speech, or using pen and paper or using hand signals.)
providing the recommended commands to a member of the responder team. (This relates to a human providing commands using speech or pen and paper.)
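For illustration only, the following minimal sketch (in Python) shows the two-model arrangement recited above as the Examiner understands it; all identifiers (e.g., StateRecognizer, CommandGenerator) are hypothetical and do not appear in the claims or the as-filed specification.

# Illustrative sketch only; identifiers are hypothetical and are not
# drawn from the claims or the as-filed specification.
from dataclasses import dataclass

@dataclass
class SituationalData:
    image_descriptions: list  # image-based situational data
    transcripts: list         # natural language data

class StateRecognizer:
    """Stands in for the claimed "first model"."""
    def recognize(self, data):
        # e.g., infer event type and severity from the ingested data
        return {"event": "wildfire", "severity": "high"}

class CommandGenerator:
    """Stands in for the claimed "second model" (generative NL model)."""
    def generate(self, data, state, template_commands):
        # compare a) situational data and recognized state information with
        # b) plan data comprising template commands, then generate the
        # recommended commands based on that comparison
        return [t for t in template_commands if state["event"] in t]

def recommend_commands(data, template_commands):
    state = StateRecognizer().recognize(data)
    return CommandGenerator().generate(data, state, template_commands)

print(recommend_commands(
    SituationalData(["smoke column visible"], ["units report spreading fire"]),
    ["Evacuate wildfire perimeter", "Hold position"]))

The sketch is offered only to frame the abstract-idea analysis: each step has a direct human-mind or pen-and-paper analog, as annotated in the claim above.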
The dependent claims do not include additional limitations that integrate the abstract idea into a practical application or that cause the claims as a whole to amount to significantly more than the underlying abstract idea.
Regarding Independent Claim 10, claim 10 is a computer storage medium claim with limitations similar to that of Claim 1 and is rejected under the same rationale.
Regarding Independent Claim 19, claim 19 is a system claim with limitations similar to that of Claim 1 and is rejected under the same rationale.
This judicial exception is not integrated into a practical application. In particular, claim 1 recites the additional elements of a “storage” and a “processor.” For example, [0031] of the as-filed specification describes: “In some embodiments, memory 216, processor 212, and/or database 214 are distributed (across multiple devices or computers that represent system 200). In one embodiment, system 200 may be part of a computing device (e.g., smartphone, tablet, computer, and the like).” Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a device amounts to no more than a generic computer. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, the additional limitations noted above are directed to insignificant extra-solution activity. The claims are not patent eligible.
Dependent claim 2 recites,
“2. The method of claim 1, wherein the second model selects, based on the comparison, one or more template commands that match the portion of the situational data and the recognized state information, (This relates to a human selecting template commands using speech, or using pen and paper or using hand signals.) and the second model generates the recommended commands using the matching one or more template commands.” (This relates to a human generating commands using speech, or using pen and paper or using hand signals.)
Dependent claim 3 recites,
“3. The method of claim 2, wherein the second model generates at least a portion of the recommended commands by editing, augmenting, or rewriting the matching one or more template commands using the portion of the situational data and the recognized state information. (This relates to a human generating, editing, augmenting, or rewriting commands using speech, or using pen and paper or using hand signals.)
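For illustration only, a minimal sketch of the editing/rewriting limitation of claim 3, with simple string filling standing in for the generative natural language model; the template text and field names are hypothetical.

# Hypothetical illustration only: string filling stands in for a generative
# model that edits/rewrites a matched template command using situational
# data and recognized state information.
def rewrite_template(template, situational, state):
    return template.format(
        location=situational.get("location", "the area"),
        severity=state.get("severity", "unknown"),
    )

print(rewrite_template(
    "Evacuate residents near {location}; fire severity is {severity}.",
    {"location": "Ridge Road"},
    {"severity": "high"},
))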
Dependent claim 4 recites,
“4. The method of claim 1, wherein the plan data comprises the template commands and descriptive information that describes a context for the template commands, and the second model generates the recommended commands by selecting one or more of the template commands that match the portion of the situational data and state information.” (This relates to a human generating, and selecting commands using speech, or using pen and paper or using hand signals.)
Dependent claim 5 recites,
“5. The method of claim 1, wherein the plan data comprises a graph of template commands and links among the template commands, the graph stores context for each template command, and the second model generates the recommended commands by a) comparing the portion of the situational data and state information to the graph and b) selecting matching template commands.” (This relates to a human storing contextual information in the human mind or using pen and paper, generating commands using speech, or using pen and paper or using hand signals, comparing data in the human mind using logic and reasoning, and selecting using speech.)
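For illustration only, a minimal sketch of the graph-structured plan data recited in claim 5 (and the context tags recited in claim 6, below); the node names and tags are hypothetical.

# Hypothetical illustration only: plan data as a graph whose nodes are
# template commands, whose edges link related commands, and whose nodes
# store context tags for each command.
plan_graph = {
    "evacuate_zone": {
        "context": {"situational": {"smoke"}, "state": {"wildfire"}},
        "links": ["dispatch_fire_team"],
    },
    "dispatch_fire_team": {
        "context": {"situational": {"flames"}, "state": {"wildfire"}},
        "links": [],
    },
}

def select_matching(graph, situational, state):
    # a) compare situational data and state information to the graph;
    # b) select the template commands whose context tags match
    return [cmd for cmd, node in graph.items()
            if node["context"]["state"] & state
            or node["context"]["situational"] & situational]

print(select_matching(plan_graph, {"smoke"}, {"wildfire"}))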
Dependent claim 6 recites,
“6. The method of claim 5, wherein the context for a given template command comprises tags of: predefined situational data associated with the given template command; predefined state information associated with the given template command, or any combination thereof.” (This relates to a human tagging information using speech or pen and paper.)
Dependent claim 7 recites,
7. The method of claim 1, wherein the ongoing situation comprise a wildfire, one or more violent individuals, a riot, a weather event, a global relief effort, a military or police event, or any combination thereof. (This relates to situational data a human can recognize.)
Dependent claim 8 recites,
8. The method of claim 1, wherein additional situational data is ingested from the multiple data sources at a point in time after the situational data is ingested, (This relates to a human ingesting multiple data sources using the five senses.)
and wherein the method further comprises:
compiling, by a feedback manager, training instances of: state information recognized by the first model that is inaccurate; and/or recommended commands generated by the second model that are not performed by the responder team, (This relates to a human using pen and paper or the human mind to compile data.)
wherein the feedback manager processes the additional situational data to compile the training instances; and updating a training of the first model and/or the second model using at least the training instances. (This relates to a human processing data in the human mind or using pen and paper in order to compile the data and update the model using pen and paper or the human mind.)
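For illustration only, a minimal sketch of the feedback-manager limitation of claim 8; the function names and data shapes are hypothetical.

# Hypothetical illustration only: a "feedback manager" that compiles
# training instances from later-ingested situational data, for use in
# updating the training of the first and/or second model.
def compile_training_instances(additional_data, recognized_states,
                               recommended_commands, performed_commands):
    instances = []
    for state in recognized_states:
        if state not in additional_data.get("confirmed_states", []):
            instances.append(("first_model", state))   # inaccurate recognition
    for cmd in recommended_commands:
        if cmd not in performed_commands:
            instances.append(("second_model", cmd))    # not performed by team
    return instances

instances = compile_training_instances(
    {"confirmed_states": ["wildfire"]},
    ["wildfire", "riot"],                # "riot" was recognized inaccurately
    ["Evacuate zone", "Hold position"],
    ["Evacuate zone"],                   # "Hold position" was not performed
)
print(instances)  # these instances would then be used to update training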
Dependent claim 9 recites,
9. The method of claim 1, wherein the ingested image-based situational data comprises descriptions of images or video data related to the ongoing situation, and the ingested natural language data comprises transcripts of conversations among individuals related to the ongoing situation. (This relates to background informational data that a human can observe using vision.)
As to dependent Claim 11, Claim 11 is a parallel storage medium claim with limitations similar to that of Claim 2 and is rejected under the same rationale.
As to dependent Claim 12, Claim 12 is a parallel storage medium claim with limitations similar to that of Claim 3 and is rejected under the same rationale.
As to dependent Claim 13, Claim 13 is a parallel storage medium claim with limitations similar to that of Claim 4 and is rejected under the same rationale.
As to dependent Claim 14, Claim 14 is a parallel storage medium claim with limitations similar to that of Claim 5 and is rejected under the same rationale.
As to dependent Claim 15, Claim 15 is a parallel storage medium claim with limitations similar to that of Claim 6 and is rejected under the same rationale.
As to dependent Claim 16, Claim 16 is a parallel storage medium claim with limitations similar to that of Claim 7 and is rejected under the same rationale.
As to dependent Claim 17, Claim 17 is a parallel storage medium claim with limitations similar to that of Claim 8 and is rejected under the same rationale.
As to dependent Claim 18, Claim 18 is a parallel storage medium claim with limitations similar to that of Claim 9 and is rejected under the same rationale.
As to dependent Claim 20, Claim 20 is a parallel system claim with limitations similar to that of Claim 1 and is rejected under the same rationale.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Teran Matus (U.S. Patent Application Publication No. US 2021/0279603 A1) in view of Roh (U.S. Patent Application Publication No. US 2023/0270562 A1).
Regarding independent Claim 1, Teran Matus teaches
1. A method for resolving commands for responding to an ongoing situation using machine learning, the method comprising: ingesting situational data from multiple data sources, wherein the situational data comprises image-based situational data and natural language data that relates to an ongoing situation, and wherein at least a portion of the situational data relates to response activities of a responder team for the ongoing situation; (see Teran Matus Figure 2 element 218, see [0017] “In an aspect, the described system is trained using labeled training data derived from previously saved data corresponding to special circumstances that have been identified and documented. To illustrate, the labeled training data may include video footage of a person carrying (or concealed carrying) a weapon, video/images of persons in a criminal database or in video footage captured near a scene of interest, sound of weapons being used, explosions, people reacting to weapon use or other events (e.g., screaming), a fire detected by infrared sensors, social media posts or news posts describing criminal activity, sensor data captured during a particular event, emergency call center (e.g., “911” in the U.S.) transcripts or audio, etc. In an aspect, the system uses cognitive algorithms to “learn” what makes a circumstance of interest, and the system's learning is reinforced by human feedback that confirms whether an identification output by the system was accurate (e.g., was an event that needed to be highlighted and analyzed further).”)
generating, by an ensemble machine learning model comprising at least first model and a second model, recommended commands by: (see Teran Matus Figure 2 element 220, see [0059] “In some implementations, the computing device(s) 306 generate output based on the event classification data. For example, one or more of the alarms 138 of FIG. 1 may be generated when the event classification data indicates that a particular type of event is detected in the datasets 304. In some implementations, the event classification data may be used to select a particular one of the event response models 328 to execute to generate a response recommendation (e.g., one of the recommendations 132 of FIGS. 1 and 2) or to select a response action. For example, each event response model 328 may be configured or trained to generate a response recommendation for a particular type of event or a particular set of types of events. To illustrate, a first event response model may be configured to generate response recommendations for structure fire events, and a second event response model may be configured to generate response recommendations for robberies.”) recognizing, via the first model using the ingested situational data, state information about the ongoing situation; generating, via the second model, the recommended commands, (see Teran Matus [0060] “During execution of an event response model, a portion of the digest data, a portion of the raw data from the datasets 304, or both, may be provided as input to the event response model. The event response models 328 can include heuristic rules, machine learning models, or both. For example, certain response actions can be generated based on rules that map particular event types to corresponding actions, such as a command 342 transmitted by the interface(s) 308 to dispatch one or more unmanned systems 340 (e.g., monitoring drones) to an area associated with a particular type of event. Other response actions can be determined using a machine learning model to predict an appropriate response action. For example, the machine learning model can include a neural network, a decision tree, or another machine learning model trained to select a response action that is most likely to achieve one or more results, such as minimizing or reducing causalities, minimizing or reducing property loss, optimal or acceptable use of resources, or combinations thereof. In some implementations, an event response model 328 performs a response simulation for a particular type of event (e.g., based on a time and location associated with the event, available resources, historical responses, etc.) to select the response action taken or recommended. For some event types, one or more response actions may be selected based on heuristic rules and one or more additional response action may be selected based on response simulation. To illustrate, when a structure fire event is detected, a nearest available fire response team may be automatically dispatched to the structure fire based on a heuristic rule. In this illustrative example, a machine learning-based event response model can be executed, using available data, to project whether one or more additional fire response teams or other resources (e.g., police) should also be dispatched.”) wherein, the second model comprises a generative natural language model configured to compare (see Teran Matus [0053] The data reduction models 322 include machine learning models that are trained to generate digest data based on the datasets 304. 
In this context, digest data refers to information that summarizes or represents at least a portion of one of the datasets 304. For example, digest data can include keywords derived from natural language text or audio data; descriptors or identifiers of features detected in image data, video data, audio data, or sensor data; or other summarizing information.”)
Teran Matus does not specifically teach a) at least a portion of the situational data and the recognized state information; and b) plan data for the ongoing situation that comprises template commands, and the second model generates the recommended commands based on the comparison; and providing the recommended commands to a member of the responder team. However, Roh does teach this limitation (see Roh [0034] A surgical workflow is generated for a surgical robot. The surgical workflow comprises workflow objects for the surgical procedure based on the surgical actions. The surgical workflow is adjusted based on a comparison of the surgical workflow to stored historical workflows. The surgical robot is configured with the adjusted workflow comprising the workflow objects and information describing the surgical actions. The surgical robot performs the surgical actions on the patient using the adjusted workflow.”) (see Roh [0039] In embodiments, a robotic surgical system uses machine learning (ML) to provide recommendations and methods for automated robotic ankle arthroscopic surgery. Historical patient data is filtered to match particular parameters of a patient. The parameters are correlated to the patient. A robotic surgical system or a surgeon reviews the historical patient data to select or adjust the historical patient data to generate a surgical workflow for a surgical robot for performing the robotic arthroscopic surgery.”) (see Roh [0042] In some embodiments, the disclosed systems can perform an arthroscopic surgical procedure on a joint of a patient. The system can acquire data (e.g., user input, patient data, etc.) from user interfaces and storage devices. An ML algorithm can analyze the patient data to determine one or more ligament-attachment joint stabilization steps for the joint. The system can generate a robotic-enabled surgical plan for the joint based on the user input and the one or more ligament-attachment joint stabilization steps. In some implementations, the robotic-enabled surgical plan includes a sequence of surgical steps with corresponding surgical tools for attaching one or more connectors to at least one ligament of the joint and another structure of the patient to promote stabilization of the joint. A graphical user interface (GUI) can display the robotic-enabled surgical plan for intraoperative viewing by a user (e.g., healthcare provider) while the robotic surgical system robotically operates on the patient. The system can receive, from the user, intraoperative user input associated with one or more of the surgical steps of the robotic-enabled surgical plan. The system determines information to be displayed, via the GUI, based on the received intraoperative user input while controlling one or more of the tools operated by the robotic surgical system according to a selection.”)
Teran Matus and Roh are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Teran Matus to incorporate a) at least a portion of the situational data and the recognized state information; and b) plan data for the ongoing situation that comprises template commands, the second model generating the recommended commands based on the comparison, and providing the recommended commands to a member of the responder team, as taught by Roh. This modification improves the chances of success of the detailed steps of an event procedure, as recognized by Roh ([0046]).
Regarding Independent Claim 10, claim 10 is a computer storage medium claim with limitations similar to that of Claim 1 and is rejected under the same rationale. Additionally, Teran Matus teaches 10. A non-transitory computer-readable storage medium for resolving commands for responding to an ongoing situation using machine learning, the computer-readable storage medium storing instructions that, when executed by a computing system, cause a computing system to: (see Teran Matus [0145] 27. “…A non-transitory computer-readable storage device”) (see Teran Matus [0140] “The systems and methods of the present disclosure may take the form of or include a computer program product on a computer-readable storage medium or device having computer-readable program code (e.g., instructions) embodied or stored in the storage medium or device. Any suitable computer-readable storage medium or device may be utilized, including hard disks, CD-ROM, optical storage devices, magnetic storage devices, and/or other storage media. As used herein, a “computer-readable storage medium” or “computer-readable storage device” is not a signal.”)
Regarding Independent Claim 19, claim 19 is a system claim with limitations similar to that of Claim 1 and is rejected under the same rationale. Additionally, Teran Matus teaches 19. A computing system for resolving commands for responding to an ongoing situation using machine learning, the computing system comprising: one or more processors; and one or more memories storing instructions that, when executed by the one or more processors, cause the computing system to: (see Teran Matus [0142] “Computer program instructions may be loaded onto a computer or other programmable data processing apparatus to produce a machine, such that the instructions that execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory or device that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.”)
As to Claim 2, Teran Matus in view of Roh teaches 2. The method of claim 1,
Furthermore, Roh teaches wherein the second model selects, based on the comparison, one or more template commands that match the portion of the situational data and the recognized state information, and the second model generates the recommended commands using the matching one or more template commands. (see Roh [0034] “A surgical workflow is generated for a surgical robot. The surgical workflow comprises workflow objects for the surgical procedure based on the surgical actions. The surgical workflow is adjusted based on a comparison of the surgical workflow to stored historical workflows. The surgical robot is configured with the adjusted workflow comprising the workflow objects and information describing the surgical actions. The surgical robot performs the surgical actions on the patient using the adjusted workflow.”) (see Roh [0039] “In embodiments, a robotic surgical system uses machine learning (ML) to provide recommendations and methods for automated robotic ankle arthroscopic surgery. Historical patient data is filtered to match particular parameters of a patient. The parameters are correlated to the patient. A robotic surgical system or a surgeon reviews the historical patient data to select or adjust the historical patient data to generate a surgical workflow for a surgical robot for performing the robotic arthroscopic surgery.”) (see Roh [0042] In some embodiments, the disclosed systems can perform an arthroscopic surgical procedure on a joint of a patient. The system can acquire data (e.g., user input, patient data, etc.) from user interfaces and storage devices. An ML algorithm can analyze the patient data to determine one or more ligament-attachment joint stabilization steps for the joint. The system can generate a robotic-enabled surgical plan for the joint based on the user input and the one or more ligament-attachment joint stabilization steps. In some implementations, the robotic-enabled surgical plan includes a sequence of surgical steps with corresponding surgical tools for attaching one or more connectors to at least one ligament of the joint and another structure of the patient to promote stabilization of the joint. A graphical user interface (GUI) can display the robotic-enabled surgical plan for intraoperative viewing by a user (e.g., healthcare provider) while the robotic surgical system robotically operates on the patient. The system can receive, from the user, intraoperative user input associated with one or more of the surgical steps of the robotic-enabled surgical plan. The system determines information to be displayed, via the GUI, based on the received intraoperative user input while controlling one or more of the tools operated by the robotic surgical system according to a selection.”)
Teran Matus and Roh are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of the combination of Teran Matus and Roh to incorporate wherein the second model selects, based on the comparison, one or more template commands that match the portion of the situational data and the recognized state information, and the second model generates the recommended commands using the matching one or more template commands, as taught by Roh. This modification improves the chances of success of the detailed steps of an event procedure, as recognized by Roh ([0046]).
As to Claim 3, Teran Matus in view of Roh teaches 3. The method of claim 2,
Furthermore, Teran Matus teaches wherein the second model generates at least a portion of the recommended commands by editing, augmenting, or rewriting the matching one or more template commands using the portion of the situational data and the recognized state information. (see Teran Matus [0026] In a particular aspect, the models utilized by the described system are trained, at least in part, based on trained event libraries (TELs). TELs may be general or may be specific to particular types of events, geographic areas, etc. To illustrate, a TEL used to train a security system for use in one part of the world may assign a high degree of suspicion to a person carrying an open flame torch, whereas a different TEL for a different part of the world may assign little meaning to such an event when analyzing the context and circumstances. Conversely, certain things may be universal from a security standpoint (e.g., a firearm being fired). TELs can be created that contain the training for specific events. These TELs may be exported, imported, combined, enhanced, added, deleted, exchangeable, etc.”) (see Teran Matus [0045] Machine learning algorithms and models 122 perform holistic analysis of the input signals 102 to detect, identify, and respond to events. Video may be analyzed to identify events, behaviors, objects, faces, etc. using models (e.g., video analysis models 112) trained on TELs. A face recognition model 114 can compare faces detected in the video with law enforcement databases (e.g., a criminals database 108) and, optionally, alternate databases 116 that supplement law enforcement databases (e.g., if law enforcement databases do not reveal a face match, images posted to various social media sites 118 may be searched for a face match). The system 100 optionally may create the alternate database 116 where it will store the faces or other means of identification of people involved directly or indirectly in a crime or relevant event and that probably are or are not stored in the criminal databases 108 in order to identify and locate these people later. Other data sources 120, including sensors 110, ambient environment characteristics, social media posts, structured data, legacy system databases, and Internet data 118, etc. may be used as further inputs to refine event detection (e.g., influence a confidence value output by the model for the detected event). New TELs 124 may also be created (or existing TELs may be augmented) based on some or all of the input signals 102. In some cases, other adjustments may be received from different instances of the system, TELs, etc.”)
As to Claim 4, Teran Matus in view of Roh teaches 4. The method of claim 1,
Furthermore, Teran Matus teaches wherein the plan data comprises the template commands and descriptive information that describes a context for the template commands, and the second model generates the recommended commands by selecting one or more of the template commands that match the portion of the situational data and state information. (see Teran Matus [0026] In a particular aspect, the models utilized by the described system are trained, at least in part, based on trained event libraries (TELs). TELs may be general or may be specific to particular types of events, geographic areas, etc. To illustrate, a TEL used to train a security system for use in one part of the world may assign a high degree of suspicion to a person carrying an open flame torch, whereas a different TEL for a different part of the world may assign little meaning to such an event when analyzing the context and circumstances. Conversely, certain things may be universal from a security standpoint (e.g., a firearm being fired). TELs can be created that contain the training for specific events. These TELs may be exported, imported, combined, enhanced, added, deleted, exchangeable, etc.”) (see Teran Matus [0045] Machine learning algorithms and models 122 perform holistic analysis of the input signals 102 to detect, identify, and respond to events. Video may be analyzed to identify events, behaviors, objects, faces, etc. using models (e.g., video analysis models 112) trained on TELs. A face recognition model 114 can compare faces detected in the video with law enforcement databases (e.g., a criminals database 108) and, optionally, alternate databases 116 that supplement law enforcement databases (e.g., if law enforcement databases do not reveal a face match, images posted to various social media sites 118 may be searched for a face match). The system 100 optionally may create the alternate database 116 where it will store the faces or other means of identification of people involved directly or indirectly in a crime or relevant event and that probably are or are not stored in the criminal databases 108 in order to identify and locate these people later. Other data sources 120, including sensors 110, ambient environment characteristics, social media posts, structured data, legacy system databases, and Internet data 118, etc. may be used as further inputs to refine event detection (e.g., influence a confidence value output by the model for the detected event). New TELs 124 may also be created (or existing TELs may be augmented) based on some or all of the input signals 102. In some cases, other adjustments may be received from different instances of the system, TELs, etc.”)
As to Claim 5, Teran Matus in view of Roh teaches 5. The method of claim 1,
Furthermore, Roh teaches wherein the plan data comprises a graph of template commands and links among the template commands, the graph stores context for each template command, and the second model generates the recommended commands by a) comparing the portion of the situational data and state information to the graph and b) selecting matching template commands. (see Roh [0308] “In some embodiments, data used or updated by one or more operations described in this disclosure may be stored in a set of databases 1030. In some embodiments, the server 1002, the wearable device 1004, the set of external displays 1005, or other computer devices may access the set of databases to perform one or more operations described in this disclosure. For example, a prediction model used to determine ocular information may be obtained from a first database 1031, where the first database 1031 may be used to store prediction models or parameters of prediction models. Alternatively, or in addition, the set of databases 1030 may store feedback information collected by the wearable device 1004 or results determined from the feedback information. For example, a second database 1032 may be used to store a set of user profiles that include or link to feedback information corresponding with eye measurement data for the users identified by the set of user profiles. Alternatively, or in addition, the set of databases 1030 may store instructions indicating different types of testing procedures. For example, a third database 1033 may store a set of testing instructions that causes a first stimulus to be presented on the wearable device 1004, then causes a second stimulus to be presented on a first external display 1005a, and thereafter causes a third stimulus to be presented on a second external display 1005b.”)
Teran Matus and Roh are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of the combination of Teran Matus and Roh to incorporate the plan data comprises a graph of template commands and links among the template commands, the graph stores context for each template command, and the second model generates the recommended commands by a) comparing the portion of the situational data and state information to the graph and b) selecting matching template commands, as taught by Roh. This modification improves the chances of success of the detailed steps of an event procedure, as recognized by Roh ([0046]).
As to Claim 6, Teran Matus in view of Roh teaches 6. The method of claim 5,
Furthermore, Roh teaches wherein the context for a given template command comprises tags of: predefined situational data associated with the given template command; predefined state information associated with the given template command, or any combination thereof. (see Roh [0112] In some embodiments, the ML system 200 trains the ML model 216, based on the training data 220, to correlate the feature vector 212 to expected outputs in the training data 220. As part of the training of the ML model 216, the ML system 200 forms a training set of features and training labels by identifying a positive training set of features that have been determined to have a desired property in question, and, in some embodiments, forms a negative training set of features that lack the property in question.”) (see Roh [0308] “In some embodiments, data used or updated by one or more operations described in this disclosure may be stored in a set of databases 1030. In some embodiments, the server 1002, the wearable device 1004, the set of external displays 1005, or other computer devices may access the set of databases to perform one or more operations described in this disclosure. For example, a prediction model used to determine ocular information may be obtained from a first database 1031, where the first database 1031 may be used to store prediction models or parameters of prediction models. Alternatively, or in addition, the set of databases 1030 may store feedback information collected by the wearable device 1004 or results determined from the feedback information. For example, a second database 1032 may be used to store a set of user profiles that include or link to feedback information corresponding with eye measurement data for the users identified by the set of user profiles. Alternatively, or in addition, the set of databases 1030 may store instructions indicating different types of testing procedures. For example, a third database 1033 may store a set of testing instructions that causes a first stimulus to be presented on the wearable device 1004, then causes a second stimulus to be presented on a first external display 1005a, and thereafter causes a third stimulus to be presented on a second external display 1005b.”)
Teran Matus and Roh are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of the combination of Teran Matus and Roh to incorporate the context for a given template command comprises tags of: predefined situational data associated with the given template command; predefined state information associated with the given template command, or any combination thereof, as taught by Roh. This modification improves the chances of success of the detailed steps of an event procedure, as recognized by Roh ([0046]).
As to Claim 7, Teran Matus in view of Roh teaches 7. The method of claim 1,
Furthermore, Teran Matus teaches wherein the ongoing situation comprise a wildfire, one or more violent individuals, a riot, a weather event, a global relief effort, a military or police event, or any combination thereof. (see Teran Matus [0036] In an example, the described zone-driven system (that may be receiving as an input the result of the event-driven system described above and/or additionally receiving input based on emergency calls, police reports, internet data and/or other sources) analyzes what is happening in a zone as well as in the zones around that zone. The zone-driven system may analyze the resources available and features of the available resources. Based on what resources are available, the zone-driven system may make a recommendation regarding how to use those resources, in consideration of what is happening in multiple zones and the predictions in those multiple zones. Hence, in such an example, the zone-driven system may not suggest an action based on just a single event, but rather based on numerous events happening in the zone and surrounding zones of interest and based on available resources. The zone-driven system can also make recommendations for resources needed in the long run and can provide support information based on what is happening (at a given time) to justify the acquisition of more assets, technologies, hiring of more personnel (e.g., police), or implementing certain training to personnel.”) (see Teran Matus [0034] “According to a particular aspect, the system evaluates a risk index against the resources in and around each zone within the monitored geographic or virtual area. When the available resources are predicted to be inadequate to respond to an event (e.g., resources are insufficient, underutilized, over utilized, etc.) in the short, medium, and/or long term, the system generates alerts. Such alerts may be classified by multiple parameters of relevance and urgency (as in the case for the above-described event-driven system). Different level alerts may be communicated to different individuals, systems or subsystems for follow-up action, such as need of resource relocation, resource deployment, resource acquisition, resource reassignment, etc. In some examples, the system considers distance and duration of travel with respect to resources from surrounding zones in determining whether sufficient resources are available to respond to a particular event under different environmental (e.g., weather) scenarios. Thus, the system may generally, in view of the determined risk indexes for various zones, analyze available resources capabilities and features of the resources, distances between zones, environmental conditions, risk index trends of zones, and per-zone resources need predictions. The system may propose one or more solutions to address the predictions for the short term and may optionally recommend other changes or acquisitions for the medium or long term. Depending on implementation, the system may utilize genetic algorithms, heuristic algorithms, and/or machine learning models during operation.”)
As to Claim 8, Teran Matus in view of Roh teaches 8. The method of claim 1,
Furthermore, Teran Matus teaches wherein additional situational data is ingested from the multiple data sources at a point in time after the situational data is ingested, and wherein the method further comprises: (see Teran Matus [0014] According to particular aspects, public safety systems can be improved by using artificial intelligence (AI) to analyze various types and modes of input data in a holistic fashion. For example, video camera output can be analyzed using AI models to identify suspicious objects left unattended in places (e.g., airports), people or objects in a “wrong” or prohibited place or time, etc. Accomplishments in deep learning and improved computing capabilities enable some systems to go a step further. For example, in a particular aspect, a system can identify or predict very specific events based on multiple and distinct data sources that generate distinct types of data. As another example, events and event responses can be simulated using complex reasoning based on available evidence. Notifications regarding identified or predicted events can be issued to relevant personnel and automated systems. Furthermore, remedial actions can be recommended, or in some cases, automatically initiated using automated response system, such as unmanned vehicles.”) (see Teran Matus [0022] “The disclosed system may also analyze other types of data. For example, the system may search public and private sources, such as the internet (e.g., social media or other posts, real-time news, dark web, etc.), for information regarding events in a geographical region of interest, interpret the data in context and “give meaning” to the data, classify the data, and assign a credibility index as well as weight the data with multiple relevance parameters (e.g., dangerousness, alarm, importance, etc.). The system may also automatically send reports or notifications regarding such events to users configured to receive such notifications. The system may generate recommendations regarding response actions and resource allocations/deployments. In some examples, the system can provide post-event information that can assist an investigation, searching the internet for relevant data related to an event that occurred within the monitored geographical or virtual area, etc.”) compiling, by a feedback manager, training instances of: state information recognized by the first model that is inaccurate; and/or recommended commands generated by the second model that are not performed by the responder team, wherein the feedback manager processes the additional situational data to compile the training instances; and updating a training of the first model and/or the second model using at least the training instances. (see Teran Matus [0038] “While several of the foregoing aspects are described with reference to security, it is to be understood that the techniques of the present disclose can also be used in other contexts. As a first example, the system can be used before, during and after a natural disaster, such as an earthquake. Prior to the occurrence of an earthquake, the system can evaluate zones that were more severely and/or commonly damaged by previous earthquakes, improvements (e.g., building code/structural improvements) made since the last earthquake and expected results from such improvements, available resources, resource distribution, etc. 
The system may suggest re-distribution of resources or suggest reinforcement in certain areas that the executed machine learning models predict as having a higher risk index for an earthquake. During an earthquake, the system can receive inputs from diverse sources including video, unmanned vehicles, sensors, emergency calls, rescue teams, and other sources and dynamically recommend resource allocation/distribution to assist with search and rescue operations. After the earthquake response is completed, the system can document the efficacy of its recommended actions and generate training data that can be used to train subsequent generation(s) of models so that future event detections/classifications, risk index predictions, and suggested actions are more accurate and effective.”)
As to Claim 9, Teran Matus in view of Roh teaches 9. The method of claim 1,
Furthermore, Teran Matus teaches wherein the ingested image-based situational data comprises descriptions of images or video data related to the ongoing situation, and the ingested natural language data comprises transcripts of conversations among individuals related to the ongoing situation. (see Teran Matus [0051] The data sources 302 can include public sources (e.g., internet-based data sources), private sources (e.g., local sensor, proprietary databases/systems, legacy systems databases), government sources (e.g., emergency call center transcripts), or a combination thereof. Further, in some implementations, one or more of the data sources 302 may be integral to the computing device(s) 306. For example, the computing device(s) 306 include one or more memory devices 310, which may store a database that includes one of the datasets 304.”) (see Teran Matus [0054] “Generally, each data reduction model is configured to process a corresponding data type, structured or unstructured. For example, a first data reduction model may include a natural language processing model trained or configured to extract terms of interest (e.g., keywords) from text, such as social media posts, news articles, transcripts of audio data (which may be generated by the speech recognition instructions or another transcription source), etc. In this example, a second data reduction model may include a classifier or a machine learning model that is trained to generate a descriptor based on features extracted from a sensor data stream. Further, in this example, a third data reduction model may include an object detection model trained or configured to detect particular objects, such as weapons, in image data or video data and to generate an identifier or a descriptor of the detected object. In some implementations, a fourth data reduction model may include face recognition model trained or configured to distinguish human faces in image data or video data and to generate a descriptor (e.g., a name and/or other data, such as a prior criminal history) of a detected person. Other examples of data reduction models 322 include vehicle recognition models that generate descriptors of detected vehicles (e.g., color, make, model, and/or year of a vehicle), license plate reader models that generate license plate numbers based on license plates detected in images or video, sound recognition models that generate descriptors of recognized sounds (e.g., gunshots, shouts, alarm claxons, car horns), meteorological models that generate descriptors of weather conditions based on sensor data, etc. The digest data also includes or is associated with (e.g., as metadata) time information and location information associated with at least one dataset of the datasets 304.”)
As to dependent Claim 11, Claim 11 is a parallel storage medium claim with limitations similar to that of Claim 2 and is rejected under the same rationale.
As to dependent Claim 12, Claim 12 is a parallel storage medium claim with limitations similar to that of Claim 3 and is rejected under the same rationale.
As to dependent Claim 13, Claim 13 is a parallel storage medium claim with limitations similar to that of Claim 4 and is rejected under the same rationale.
As to dependent Claim 14, Claim 14 is a parallel storage medium claim with limitations similar to that of Claim 5 and is rejected under the same rationale.
As to dependent Claim 15, Claim 15 is a parallel storage medium claim with limitations similar to that of Claim 6 and is rejected under the same rationale.
As to dependent Claim 16, Claim 16 is a parallel storage medium claim with limitations similar to that of Claim 7 and is rejected under the same rationale.