DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/25/2025 has been entered.
Response to Arguments
Applicant's arguments filed 09/22/2025 (“Remarks”) have been fully considered but they are not persuasive.
Regarding the 35 U.S.C. § 103 rejections, applicant's arguments with respect to the prior art rejections have been fully considered, but they are moot because they are directed to the amended claims, which recite new combinations of limitations. Please see below for new grounds of rejection, necessitated by amendment.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 5, 7-8, 11-12, 15, 17-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Steenwinckel, et al., Non-Patent Literature “FLAGS: A methodology for adaptive anomaly detection and root cause analysis on sensor data streams by fusing expert knowledge with machine learning” (“Steenwinckel”) in view of Liu, et al., Non-Patent Literature “Time-constrained dynamic semantic compression for video indexing and interactive searching” (“Liu”) and further in view of Zheng, et al., Non-Patent Literature “An Efficient Preference-Based Sensor Selection Method in Internet of Things” (“Zheng”), Dos Santos, et al., Non-Patent Literature “Underwater Sonar and Aerial Images Data Fusion for Robot Localization” (“Dos Santos”), Halilaj, et al., Non-Patent Literature “A Knowledge Graph-based Approach for Situation Comprehension in Driving Scenarios” (“Halilaj”), and Park, et al., Non-Patent Literature “Estimating Node Importance in Knowledge Graphs Using Graph Neural Networks” (“Park”).
Regarding claim 1 and analogous claims 11 and 20, Steenwinckel discloses:
A method comprising: making, by a device, an inference about an event indicated by sensor data from a plurality of sources (Steenwinckel, pg. 35 col. 1, “In the first phase, both data- and knowledge-driven techniques are used in parallel. They take as input the data streams provided by one or more sensors [by sensor data from a plurality of sources], together with case-specific context data. Faults (knowledge-driven) or outliers (data-driven) are outputted. If possible, an interpretation of the detected anomalies is provided.” [A method comprising: making, by a device, an inference about an event indicated]).
by applying a semantic reasoning engine to the sensor data, (Steenwinckel, pg. 35 col. 1-2, “This enriched stream is fed to the Semantic FD/RCA module. This module keeps track of a window of semantic observations and uses a semantic reasoner [by applying a semantic reasoning engine to the sensor data,] on these windows to infer semantic rules provided by the experts. When such a rule is triggered, a known fault is detected.”).
wherein the sensor data from the plurality of sources comprises time series data; (Steenwinckel, pg. 36 col. 1, “By exploiting all the semantic background information captured in the KG, comprehensive visualizations can easily be created. For example, anomalies can easily be grouped according to the context, system, configuration or the time frame [wherein the sensor data from the plurality of sources comprises time series data;] they occur in. Such visualizations allow the end-users to quickly get a view on the system behavior without being overloaded with information.”).
further wherein the semantic reasoning engine uses a…knowledge graph…to make the inference about the event. (Steenwinckel, pg. 32 col. 2, “When the constructed ontologies and rules have been generated, they can be used to annotate incoming raw data. The appropriate metadata is used to link them to the available background knowledge, as shown in the bottom part of Fig. 1. This results in a so-called Knowledge Graph (KG), where the data is linked with the domain metadata [uses a…knowledge graph]. Commonly available semantic reasoners [further wherein the semantic reasoning engine], e.g., Hermit [17] and Pellet [18], can be used to interpret this semantic data to detect possible faults […to make the inference about the event.] and infer causes using the ontology and rules.”).
…from a user interface… (Steenwinckel, pg. 35 col. 1, “During the second phase, the detected anomalies are shown in a comprehensive dashboard […from a user interface…]. Both the associated raw data and an interpretation, if available, are shown as well. The user can then provide feedback, e.g., confirm the anomalies and faults, merge them, or edit them. The feedback is also stored inside the KG.”).
selecting, by the device, a subset of the sensor data based on the inference… (Steenwinckel, pg. 35 col. 1, “During the second phase, the detected anomalies are shown in a comprehensive dashboard. Both the associated raw data and an interpretation, if available, are shown as well; showing the anomalies is interpreted as selecting a subset of sensor data based on the inference [i.e. selecting, by the device, a subset of the sensor data based on the inference…]. The user can then provide feedback, e.g., confirm the anomalies and faults, merge them, or edit them.”).
and exporting, from the device, the subset of the sensor data and the inference made by the semantic reasoning engine about the event. (Steenwinckel, pg. 36 col. 1, “faults are inserted in the KG, a dashboard visualization can be made to investigate the problem. By exploiting all the semantic background information captured in the KG, comprehensive visualizations can easily be created; creating fault/anomaly visualizations on the dashboard are interpreted as exporting the subset of sensor data based on the inference by the reasoning engine [i.e. and exporting, from the device, the subset of the sensor data and the inference made by the semantic reasoning engine about the event.]. For example, anomalies can easily be grouped according to the context, system, configuration or the time frame they occur in. Such visualizations allow the end-users to quickly get a view on the system behavior without being overloaded with information.”).
While Steenwinckel teaches a system that uses a semantic engine with a knowledge graph to export a subset of sensor data, Steenwinckel does not explicitly teach:
…hierarchical knowledge graph, comprising multiple layers ranging from concepts at a highest layer and raw sensor measurements from a plurality of sources at a lowest layer…
controlling, by the device, a focus-of-attention process over the hierarchical knowledge graph to select regions of the hierarchical knowledge graph for application of semantic reasoning by the semantic reasoning engine based on context and resource constraints;
receiving, at the device, a selected semantic compression level…that narrows the sensor data from the plurality of sources by time and type
…and on the selected semantic compression level according to the time and type, wherein the subset of the sensor data comprises sonar data obtained from a sensor on a ship to monitor conditions of the ship;
Liu teaches receiving, at the device, a selected semantic compression level… (Liu, pg. 532-533, “By following this data structure, all frames can be referenced from the top level according to their significance. This provides a flexible video indexing scheme: a user can retrieve and view any level of semantic detail of the video.” [receiving, at the device, a selected semantic compression level…]).
A person having ordinary skill in the art would reasonably find that the teachings of Liu solve the problem, present in Steenwinckel, of selecting a semantic compression level. In view of the teachings of Liu, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Liu to Steenwinckel in order to improve data efficiency by removing redundancy (cf. Liu, pg. 537 col. 2, “It is also motivated by the need for a video indexing and summarization method that is user-tunable, particular in domains in which there is little formal shot structure and a high amount of frame-to-frame redundancy. The video compression method uses a new approach of selecting key frames by discarding semantically less significant frames, while recording their significance in a retrievable data structure.”).
While Steenwinckel in view of Liu teaches a system that uses a semantic engine with a knowledge graph to export a subset of sensor data, the combination does not explicitly teach:
…hierarchical knowledge graph, comprising multiple layers ranging from concepts at a highest layer and raw sensor measurements from a plurality of sources at a lowest layer…
controlling, by the device, a focus-of-attention process over the hierarchical knowledge graph to select regions of the hierarchical knowledge graph for application of semantic reasoning by the semantic reasoning engine based on context and resource constraints;
…that narrows the sensor data from the plurality of sources by time and type;
…according to the time and type, wherein the subset of the sensor data comprises sonar data obtained from a sensor on a ship to monitor conditions of the ship;
Zheng teaches a selected semantic compression level from a user interface…that narrows the sensor data from the plurality of sources by time and type; (Zheng, pg. 168542 col. 2 and Figure 2 below,
[Figure 2 of Zheng: user interface page for entering sensor location, type, and attribute-value preferences]
“In this model, users are required to input data such as sensor location, type and sensor attribute value preference; in Figure 2, sensor preferences of response time and startup time are interpreted as narrowing the sensor data using time parameters [i.e. …that narrows the sensor data from the plurality of sources by time and type;]. In order to facilitate users to operate, we design the user page. shown in figure 2. As shown in figure 2, after fill the sensor information in the red square circle, the user click the ‘‘Submit’’ button to submit the information to the server. Then the server puts the descriptive information of the sensor recommended by model in ‘‘Result show’’ module. Finally, the users need to select content options in ‘‘Result show’’ module and submit results.”).
Steenwinckel in view of Liu, and Zheng, are in the same field of endeavor (i.e. sensor data analysis). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Steenwinckel in view of Liu with Zheng to teach the above limitation(s). The motivation for doing so is to improve response time and accuracy by reducing the size of the dataset based on user preferences (cf. Zheng, Abstract, “Specifically, this proposed method mainly includes three parts: 1) Offline constructing R-tree to search sensor resources and narrowing the size of dataset according to user’s preference; 2) Using an improved fast nondominated sorting approach to get nondominated front; 3) Employing TOPSIS to characterize every sensor option of the nondominated front. In order to illustrate the usability of the model, we conduct experiments on several simulation datasets. Experimental results show that this method outperforms several baselines in terms of both response time and accuracy.”).
While Steenwinckel in view of Liu and Zheng teaches a system that uses a semantic engine with a knowledge graph to export a subset of sensor data by time and type, the combination does not explicitly teach:
…hierarchical knowledge graph, comprising multiple layers ranging from concepts at a highest layer and raw sensor measurements from a plurality of sources at a lowest layer…
controlling, by the device, a focus-of-attention process over the hierarchical knowledge graph to select regions of the hierarchical knowledge graph for application of semantic reasoning by the semantic reasoning engine based on context and resource constraints;
…wherein the subset of the sensor data comprises sonar data obtained from a sensor on a ship to monitor conditions of the ship;
Dos Santos teaches …wherein the subset of the sensor data comprises sonar data obtained from a sensor on a ship to monitor conditions of the ship; (Dos Santos, Abstract, “Autonomous underwater navigation is a challenging problem because of the limitations imposed by aquatic environments. Among them, the use of Global Positioning System (GPS) is severely limited. Thus, we propose the use of sensor fusion to improve underwater localization in partially structured environments. We sustain our proposal explores the benefits of aerial images, such as georeferencing, to improve underwater navigation [to monitor conditions of the ship;] with a multibeam forward looking sonar […wherein the subset of the sensor data comprises sonar data obtained from a sensor on a ship].”).
Steenwinckel in view of Liu and Zheng, and Dos Santos, are in the same field of endeavor (i.e. sensor data analysis). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Steenwinckel in view of Liu and Zheng with Dos Santos to teach the above limitation(s). The motivation for doing so is that fusing sonar data with aerial imagery aids the underwater navigation of autonomous vehicles (cf. Dos Santos, pg. 578 col. 1-2, “improves underwater navigation in partially structured places by fusing aerial and underwater acoustic images. Optical sensors are efficient on aerial applications and limited in underwater. Oppositely, sonar are efficient underwater and limited on air.”).
While Steenwinckel in view of Liu, Zheng, and Dos Santos teaches a system that uses a semantic engine with a knowledge graph to export a subset of sonar data, the combination does not explicitly teach:
…hierarchical knowledge graph, comprising multiple layers ranging from concepts at a highest layer and raw sensor measurements from a plurality of sources at a lowest layer…
controlling, by the device, a focus-of-attention process over the hierarchical knowledge graph to select regions of the hierarchical knowledge graph for application of semantic reasoning by the semantic reasoning engine based on context and resource constraints;
Halilaj teaches …hierarchical knowledge graph, comprising multiple layers ranging from concepts at a highest layer and raw sensor measurements from a plurality of sources at a lowest layer… (Halilaj, pg. 705 see Figure 3, “An excerpt of the CoSI KG representing respective situations occurring in two consecutive scenes: 1) the bottom layer depicts scenery information among participants; 2) the top layer includes concepts such as classes and relationships representing the domain knowledge; and 3) the middle layer contains concrete instances capturing the scenery information based on the ontological concepts; the lowest layer shows that each scenery element is associated with multiple raw sensor measurements [i.e. …hierarchical knowledge graph, comprising multiple layers ranging from concepts at a highest layer and raw sensor measurements from a plurality of sources at a lowest layer…].”).
Steenwinckel in view of Liu, Zheng, and Dos Santos, and Halilaj, are in the same field of endeavor (i.e. sensor data analysis). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Steenwinckel in view of Liu, Zheng, and Dos Santos with Halilaj to teach the above limitation(s). The motivation for doing so is that using a multi-layer knowledge graph allows for better interpretation of the semantics between different elements (cf. Halilaj, pg. 713, “Knowledge graphs are powerful in encapsulating and representing prior knowledge, leveraging rich semantics and ontological structures. Additional rules and axioms encoded manually support the reasoning process where new facts are inferred from the existing ones.”).
While Steenwinckel in view of Liu, Zheng, Dos Santos, and Halilaj teaches a system that uses a semantic engine with a hierarchical knowledge graph to export a subset of sonar data, the combination does not explicitly teach:
controlling, by the device, a focus-of-attention process over the…knowledge graph to select regions of the…knowledge graph for application of semantic reasoning by the semantic reasoning engine based on context and resource constraints;
Park teaches controlling, by the device, a focus-of-attention process over the…knowledge graph to select regions of the…knowledge graph for application of semantic reasoning by the semantic reasoning engine based on context and resource constraints; (Park, abstract, “we develop GENI, a GNN-based method designed to deal with distinctive challenges involved with predicting node importance in KGs [to select regions of the…knowledge graph]. Our method performs an aggregation of importance scores instead of aggregating node embeddings via predicate-aware attention mechanism [controlling, by the device, a focus-of-attention process over the…knowledge graph] and flexible centrality adjustment.” and Park, pg. 596 col. 2, “Given a KG, estimating the importance of each node is a crucial task that enables a number of applications [for application of semantic reasoning by the semantic reasoning engine]…As validating information in KGs requires a lot of resources due to their size and complexity, node importance [based on context] can be used to guide the system to allocate limited resources [and resource constraints;] for entities of high importance.”).
Steenwinckel in view of Liu, Zheng, Dos Santos, and Halilaj, and Park, are in the same field of endeavor (i.e. knowledge graphs). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Steenwinckel in view of Liu, Zheng, Dos Santos, and Halilaj with Park to teach the above limitation(s). The motivation for doing so is that identifying node importance in a knowledge graph allows for better interpretation of the semantics between different elements of the graph (cf. Park, pg. 604 col. 1, “we present a method GENI that addresses this problem by utilizing rich information available in KGs in a flexible manner which is required to model complex relation between entities and their importance.”).
Regarding claim 2 and analogous claim 12, Steenwinckel in view of Liu, Zheng, Dos Santos, Halilaj, and Park teaches the method as in claim 1. Liu further teaches wherein the subset of the sensor data comprises a video frame. (Liu, Abstract, “What exits the buffer forms a highly compressed video stream consisting of only the most semantically significant video frames [wherein the subset of the sensor data comprises a video frame.], whereas the LHDAG permits efficient semantic exploration of the video interior.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Liu with the teachings of Steenwinckel, Zheng, Dos Santos, Halilaj, and Park for the same reasons set forth in the rejection of claim 1.
Regarding claim 5 and analogous claim 15, Steenwinckel in view of Liu, Zheng, Dos Santos, Halilaj, and Park teaches the method as in claim 1. Steenwinckel further teaches wherein the inference indicates that the event is a benign event. (Steenwinckel, pg. 30 col. 2, “In the area of sensor networks and streaming data, AD consists of finding unknown patterns or outliers in unlabeled data when something unusual occurs or when the conditions deviate from the normal behavior [5]. FR captures the stronger patterns as the condition develops, and the system’s operation deteriorates towards failure. Once a pattern for a specific fault has been identified, it can be referenced in the future when the pattern emerges again.”; when no anomaly or fault triggers an event, the event is interpreted as benign (i.e. wherein the inference indicates that the event is a benign event.)).
Regarding claim 7 and analogous claim 17, Steenwinckel in view of Liu, Zheng, Dos Santos, Halilaj, and Park teaches the method as in claim 1. Steenwinckel further teaches wherein the inference is that the event is of an unknown type. (Steenwinckel, pg. 30 col. 2, “In the area of sensor networks and streaming data, AD consists of finding unknown patterns or outliers in unlabeled data [wherein the inference is that the event is of an unknown type.] when something unusual occurs or when the conditions deviate from the normal behavior [5]. FR captures the stronger patterns as the condition develops, and the system’s operation deteriorates towards failure. Once a pattern for a specific fault has been identified, it can be referenced in the future when the pattern emerges again.”).
Regarding claim 8 and analogous claim 18, Steenwinckel in view of Liu, Zheng, Dos Santos, Halilaj, and Park teaches the method as in claim 7. Steenwinckel further teaches further comprising: providing the subset of the sensor data to the user interface for labeling. (Steenwinckel, pg. 36 col. 1, “Aided by the visualizations, the user can then quickly provide feedback on these anomalies, faults and causes by indicating their correctness or relabeling them based on the user’s expert knowledge.” [further comprising: providing the subset of the sensor data to the user interface for labeling.]).
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Steenwinckel, et al., Non-Patent Literature “FLAGS: A methodology for adaptive anomaly detection and root cause analysis on sensor data streams by fusing expert knowledge with machine learning” (“Steenwinckel”) in view of Liu, et al., Non-Patent Literature “Time-constrained dynamic semantic compression for video indexing and interactive searching” (“Liu”) and further in view of Zheng, et al., Non-Patent Literature “An Efficient Preference-Based Sensor Selection Method in Internet of Things” (“Zheng”), Dos Santos, et al., Non-Patent Literature “Underwater Sonar and Aerial Images Data Fusion for Robot Localization” (“Dos Santos”), Halilaj, et al., Non-Patent Literature “A Knowledge Graph-based Approach for Situation Comprehension in Driving Scenarios” (“Halilaj”), Park, et al., Non-Patent Literature “Estimating Node Importance in Knowledge Graphs Using Graph Neural Networks” (“Park”), and Ramirez-Amaro, et al., Non-Patent Literature “Automatic Segmentation and Recognition of Human Activities from Observation based on Semantic Reasoning” (“Ramirez-Amaro”).
Regarding claim 6 and analogous claim 16, Steenwinckel in view of Liu, Zheng, Dos Santos, Halilaj, and Park teaches the method as in claim 1. However, Steenwinckel in view of Liu, Zheng, Dos Santos, Halilaj, and Park does not explicitly teach wherein the event corresponds to an activity by a person.
Ramirez-Amaro teaches wherein the event corresponds to an activity by a person. (Ramirez-Amaro, pg. 5044 col. 1, “In this paper, we propose a framework that combines the information from different signals via semantic reasoning to enable robots to segment and recognize human activities [wherein the event corresponds to an activity by a person.] by understanding what it sees from videos (see Fig. 1).”).
Steenwinckel in view of Liu, Zheng, Dos Santos, Halilaj, and Park, and Ramirez-Amaro, are in the same field of endeavor (i.e. semantic event detection). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Steenwinckel in view of Liu, Zheng, Dos Santos, Halilaj, and Park with Ramirez-Amaro to teach the above limitation(s). The motivation for doing so is to improve natural communication with robots (cf. Ramirez-Amaro, pg. 5048 col. 1, “Further advantages of our system are its scalability, adaptability and intuitiveness which allow a more natural communication with artificial system such as robots.”).
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Steenwinckel, et al., Non-Patent Literature “FLAGS: A methodology for adaptive anomaly detection and root cause analysis on sensor data streams by fusing expert knowledge with machine learning” (“Steenwinckel”) in view of Liu, et al., Non-Patent Literature “Time-constrained dynamic semantic compression for video indexing and interactive searching” (“Liu”) and further in view of Zheng, et al., Non-Patent Literature “An Efficient Preference-Based Sensor Selection Method in Internet of Things” (“Zheng”), Dos Santos, et al., Non-Patent Literature “Underwater Sonar and Aerial Images Data Fusion for Robot Localization” (“Dos Santos”), Halilaj, et al., Non-Patent Literature “A Knowledge Graph-based Approach for Situation Comprehension in Driving Scenarios” (“Halilaj”), Park, et al., Non-Patent Literature “Estimating Node Importance in Knowledge Graphs Using Graph Neural Networks” (“Park”), and Greco, et al., Non-Patent Literature “IoT and semantic web technologies for event detection in natural disasters” (“Greco”).
Regarding claim 9 and analogous claim 19, Steenwinckel in view of Liu, Zheng, Dos Santos, Halilaj, and Park teaches the method as in claim 1. However, Steenwinckel in view of Liu, Zheng, Dos Santos, Halilaj, and Park does not explicitly teach wherein the inference indicates that the event represents a hazardous condition.
Greco teaches wherein the inference indicates that the event represents a hazardous condition. (Greco, Summary, “In this paper, we demonstrate how Internet of Things and Semantic Web technologies can be effectively used for abnormal event detection in the contest of an earthquake.” [wherein the inference indicates that the event represents a hazardous condition.]).
Steenwinckel in view of Liu, Zheng, Dos Santos, Halilaj, and Park, and Greco, are in the same field of endeavor (i.e. semantic event detection). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Steenwinckel in view of Liu, Zheng, Dos Santos, Halilaj, and Park with Greco to teach the above limitation(s). The motivation for doing so is to reduce damages and loss of life (cf. Greco, pg. 1 ¶ 1, “In the same way, a smarter detection of mudslides, avalanches, earthquakes, and other natural disasters can drastically help reducing reaction times, decrease the loss of life, and mitigate the damages”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Natanzon, et al., US10893296B2 discloses generating a subset of sensor data from a plurality of sensors based on a proximity based compression criterion.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS S WU whose telephone number is (571)270-0939. The examiner can normally be reached Monday - Friday 8:00 am - 4:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold can be reached on 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.S.W./Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148