Prosecution Insights
Last updated: April 19, 2026
Application No. 18/473,373

LOG DETERMINATION DEVICE, LOG DETERMINATION METHOD, LOG DETERMINATION PROGRAM, AND LOG DETERMINATION SYSTEM

Final Rejection (§103, §112)
Filed
Sep 25, 2023
Examiner
VU, TAYLOR P
Art Unit
2437
Tech Center
2400 — Computer Networks
Assignee
DENSO CORPORATION
OA Round
2 (Final)
Grant Probability: 81% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 3m
Grant Probability with Interview: 94%

Examiner Intelligence

Career Allow Rate: 81% (21 granted / 26 resolved), +22.8% vs TC avg — above average
Interview Lift: +12.8% (moderate, ~+13% lift), comparing resolved cases with vs. without interview
Avg Prosecution: 3y 3m (typical timeline); 30 applications currently pending
Total Applications: 56 across all art units (career history)

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 72.0% (+32.0% vs TC avg)
§102: 2.2% (-37.8% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 26 resolved cases
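The headline examiner statistics above can be reproduced from the raw counts the dashboard reports. A minimal sketch (the inputs are the figures shown in this report, not an official USPTO feed; variable names are illustrative):

```python
# Reproducing the examiner statistics shown above from the raw counts.

granted, resolved = 21, 26          # career outcomes for this examiner
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")   # ~80.8%, shown as 81%

tc_avg = career_allow_rate - 0.228  # dashboard reports +22.8% vs TC average
print(f"Implied Tech Center average: {tc_avg:.1%}")

# Grant probability for this application without vs. with an interview,
# as reported in the header cards above.
base, with_interview = 0.81, 0.94
print(f"Interview lift on this application: {with_interview - base:+.1%}")
```

This also shows where the "moderate +13% lift" chart label comes from: 94% minus 81% on this application, alongside the +12.8% career-wide interview lift.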

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

This office action is in response to the applicant's filing of the Remarks on 10/02/2025. Claims 1, 6, 7, and 11-17 have been amended. Claim 15 has been cancelled. Claims 18-22 have been added. Claims 1-14 and 16-22 are currently pending.

Applicant's arguments filed 10/02/2025 with respect to claims 1-10 and 13 under 35 U.S.C. 112(b), based on the interpretation of the claims under 35 U.S.C. 112(f) (see page 18), have been fully considered and are persuasive. However, Applicant's arguments with respect to claims 1, 3, 10-14, 16, and 17 under 35 U.S.C. 103 over Galula et al. (US PGPub No. 20180351980-A1) in view of Abdelaziz et al. (US PGPub No. 20200089848-A1), specifically as to the amended limitation, have been fully considered but are not fully persuasive. The rejection has nevertheless been withdrawn, and, upon further consideration, a new ground of rejection is made in view of Endo et al. (US PGPub No. 20230054840-A1), Ford et al. (US PGPub No. 20210351973-A1), and Komano et al. (US PGPub No. 20170139795-A1). The office action has been updated to reflect the claims as currently presented.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 21 and 22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 21 and 22 recite, "wherein the act of maintaining the electronic control system of the vehicle is performed by a maintenance worker at the predetermined place including the factory." The claims are indefinite because they provide no objective boundaries or metrics by which a person having ordinary skill in the art can determine what constitutes a "maintenance worker", nor is it clear whether this refers to a mechanic, quality assurance tester, mobile automotive inspector, or something else.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1, 3, 10-14, 16-19, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Galula et al.
(US PGPub No. 20180351980-A1) in view of Endo et al. (US PGPub No. 20230054840-A1) and Abdelaziz et al. (US PGPub No. 20200089848-A1).

With respect to claim 1, Galula teaches a log determination device comprising: a computer including a memory storing a program and a processor configured to execute the program causing the computer to implement: a log acquisition unit that is configured to acquire a plurality of security logs each including an abnormality information indicating an abnormality (¶0017-0025: As illustrated in Figure 1, computing device 100 may include a controller 105 that may be a hardware controller. For example, controller 105 may be, or may include, a central processing unit processor (CPU), a chip or any suitable computing or computational device, an operating system 115, a memory 120, executable code 125, a storage system 130, input devices 135 and output devices 140. Abstract: A system and method for providing fleet cybersecurity may include collecting, by a plurality of data collection units installed in a respective plurality of vehicles in the fleet, information related to cybersecurity and including the information in reports to a server (configured to acquire a plurality of security logs). Data in reports may be aggregated by the server. A cyberattack may be identified based on aggregated data.) detected in an electronic control system mounted on a vehicle (¶0030-0033: For example, a DCU 221 may be a sensor or other unit (e.g., a sniffer) adapted to capture messages or packets communicated over an in-vehicle network (e.g., Controller Area Network (CAN) or Ethernet packets, messages or frames) or a DCU 221 may be a sensor or component adapted to obtain information from electronic control units (ECUs) in a vehicle. Server 210 may be, or may be included in, a security operations center (SOC) that may manage various aspects related to the cyber-security of fleet 220.
Vehicles 220 may include an in-vehicle network that includes one or more electronic control units (ECUs) that control components in the in-vehicle network, and the in-vehicle network may include one or more security enforcement units (SEUs). (implying the electronic control unit is mounted on a vehicle).) and a position information indicating a position of the abnormality in the electronic control system; (¶0044-0045: Aggregation may include filtering true attacks from false-positives anomalous causes, e.g., ECU malfunction/replacement, minor mismatches between expected behavior and actual vehicle traffic (abnormality detected in electronic control system). Aggregation may include cross referencing anomalies with ECU malfunctions (e.g., using ECU logs/diagnostic/car maintenance logs) and/or comparing a behavior of a vehicle (e.g., as exhibited by network traffic) over the in-vehicles exhibiting similar behaviors: in this manner false positives may be detected. Aggregation may include relating/comparing location of service access to/with dealership locations, e.g., to identify if someone is tuning or hacking a vehicle or launching an attack. For example, using geolocation data as described (position information), server 210 can determine that an access to a component in a vehicle is made outside of a dealership visit log and/or GPS location data as described. Aggregation may include searching (e.g., in logs as described) for a plurality of requests originating from a specific or same device (e.g., based on an International Mobile Equipment Identity (IMEI)) that appear to originate from different users.); a pattern storage unit that is configured to store an occurrence pattern of a security log (¶0069 & ¶0077: Correlation as described may be according to any relevant aspect.
For example, server 210 may correlate (compare, relate or examine together) reports or data from/of vehicles that were serviced at a specific service facility, vehicles that include components from a specific manufacturer, vehicles that were sold in the last three months, and so on. Correlation may include examining reports or data related to a plurality of vehicles and looking for similar patterns or events, e.g., according to time, place, specific hardware, specific software, or any other aspect as described.) which is predicted to occur due to a maintenance of the electronic control system [that is an act of maintaining the electronic control system of the vehicle at a predetermined place including a factory], (¶0040-0045: Server 210 may store fleet data (e.g., in aggregated data 131). Fleet data may be created based on data received from a plurality of vehicles 220 and may include, for example, dealership and/or service station visit data, diagnostic feeds, in-vehicle update logs, ECU logs, GPS location of vehicles, server access logs, service provider data, vehicle software inventory, warranty data, weather reports, vehicle/ECU authentication logs (success and/or failure), and so on. Server 210 may fuse, correlate and/or aggregate data from other sources. For example, data received from dealerships, weather sources, traffic reports, the internet and so on may be fused or combined with data received from vehicles 220 such that various cyber-security conclusions may be derived. In some embodiments, aggregation may include examining, for a set of hacked or attacked vehicles, a manufacturing database (DB) in order to identify or determine whether all vehicles affected by an attack have one or more ECUs manufactured at the same factory and/or at the same date or time, thus a supply chain attack may be identified. (storing an occurrence pattern of a security log).
For example, the ability to fuse an ECU manufacturing DB (e.g., in server 260) into data available to server 210 can enable server 210 to determine that a virus comes from a specific manufacturer.); the occurrence pattern including a plurality of sets each including a prediction abnormality information indicating an abnormality which is predicted to be detected in the electronic control system (¶0094-0097: An embodiment may identify a vehicle’s state or context in time of attack (e.g., speed, ABS worker, ADAS etc.) to investigate if a certain state is common to attacks or might trigger it. For example, based on data in aggregated data 131 and/or based on messages received from vehicles 220, server 210 may identify, with respect to a specific attack, specific attack or specific attack type, the ABS system in a majority of (or even all) attacked vehicles was not working or the engine heat was increasing rapidly and so on. Accordingly, an embodiment may link or associate an attack (or attacker, or attack type) with a state, condition or context of vehicles such that based on state, condition or context, an attack may be identified or even predicted. For example, by linking attacks to states as described, it may be discovered that, when a vehicle is in a specific state or condition, a specific ECU is vulnerable or is susceptible to attack.) and a prediction abnormality position information indicating a position of the abnormality that is predicted to be detected in the electronic control system; and (¶0094-0097: An embodiment may group similar attacks by sequences of anomalies to identify if coming from the same source. For example, server 210 may examine logs related to attacks, identify therein similar aspects, e.g., similar anomalies such as frequency of messages, types of errors, similar timing or content sequences or other attributes of messages and associate attacks that exhibit similar attributes to a specific source (e.g., to a specific attacker)).
Galula does not disclose: an act of maintaining the electronic control system of the vehicle at a predetermined place including a factory. However, Endo teaches an act of maintaining the electronic control system of the vehicle at a predetermined place including a factory (¶0022-0023: As shown in Figure 1, the maintenance information management device 10, the management server 12, an in-vehicle apparatus of the vehicle V, and the terminal P are connected to each other so that they can communicate with each other through a network N. In addition, the maintenance information management device 10 is connected to a maintenance factory.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Endo regarding the act of maintaining the electronic control system of the vehicle at a predetermined place including a factory with the method of Galula in order to allow the use of maintenance information for wide use, such as for detecting false positives (Endo ¶0004-0005).

Galula in view of Endo does not disclose: a false positive log determination unit that is configured to compare the plurality of security logs with the occurrence pattern predicted to occur due to the act of maintaining the electronic control system of the vehicle at the predetermined place including the factory, to make a determination of whether or not the plurality of security logs is a false positive log generated by detecting an abnormality caused by the maintenance.
However, Abdelaziz teaches a false positive log determination unit that is configured to compare the plurality of security logs with the occurrence pattern predicted to [occur due to the act of maintaining the electronic control system of the vehicle at the predetermined place including the factory,] to make a determination of whether or not the plurality of security logs is a false positive log generated by detecting an abnormality [caused by the maintenance.] (¶0083 & ¶0096: As seen in Figure 8, the sign-in data comprises stored and/or real time user/sign-in data that is evaluated against stored patterns and definitions, such as stored in the risk profiles report 830, label 835 or another detector definition data structure 840, which is stored by the cloud/service portal or that is otherwise accessible to the cloud/server portal. The computing system generates a label data file that contains a listing of the corresponding user risk profiles and that specifies each corresponding user risk profile in the listing as either a false positive, false negative, true positive or true negative, based on the one or more user risk reports and the user input.) Although Galula discloses in ¶0082-0084 & ¶0109 detecting an abnormality caused by maintenance and determining a false positive log by occurrences stored in a server, the prior art does not compare the overall aggregated data with specifically stored patterns. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Abdelaziz regarding comparing the aggregated data with stored patterns with the method of Galula in view of Endo in order to improve responsiveness, accuracy and effectiveness of security within a system (Abdelaziz ¶0033).
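The claim-1 mapping above turns on comparing acquired security logs (abnormality information plus position information) against a stored occurrence pattern of abnormalities predicted to result from maintenance. A minimal sketch of that comparison logic, with all names, data structures, and example values hypothetical (the claims do not prescribe any particular implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityLog:
    abnormality: str   # abnormality information (e.g., an event type)
    position: str      # position of the abnormality in the control system

# Occurrence pattern: (abnormality, position) pairs predicted to be
# generated when the electronic control system is maintained at a
# predetermined place such as a factory. Entries are illustrative only.
MAINTENANCE_PATTERN = {
    SecurityLog("firmware_rewrite", "ECU-03"),
    SecurityLog("diag_session", "gateway"),
}

def is_false_positive(logs: list[SecurityLog]) -> bool:
    """Return True when every acquired log matches the stored occurrence
    pattern, i.e., the batch is explainable by maintenance alone."""
    return bool(logs) and all(log in MAINTENANCE_PATTERN for log in logs)
```

Under this sketch, a batch of logs fully covered by the pattern is flagged as a maintenance-induced false positive, while any unexplained (abnormality, position) pair leaves the batch standing as a potential real attack, which mirrors the distinction the examiner draws between Galula's aggregation and the claimed pattern comparison.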
With respect to claim 3, the combination of Galula in view of Endo and Abdelaziz teaches the device of claim 1 (see rejection of claim 1 above), wherein: each of the plurality of security logs includes a time information indicating a time when the abnormality has been detected; (Galula ¶0120-0122: In some embodiments, server 210 is adapted to identify a false positive detection based on the classification. For example, specific events or sequences of events may be classified as legitimate or non-threats; accordingly, although, based on an event, server 210 may otherwise determine a cyber-threat or attack is present, based on a classification (indicating time when abnormality has been detected). For example, server 210 may obtain or be provided with (e.g., by a user) data describing cyberattacks (e.g., specific sequences of events known to occur when specific attacks are launched).). the pattern storage unit further stores a prediction period indicating a period that is predicted from a time when the predicted abnormality is first detected to a time when the predicted abnormality is last detected; and (Galula ¶0121: In some embodiments, server 210 is adapted to identify previously undetected threats by correlating historical data with newly identified hacks. For example, server 210 may obtain or be provided with (e.g., by a user) data describing cyber-attacks (e.g., specific sequences of events known to occur when specific attacks are launched) (a time when an anomaly is first detected to a time when it is last detected). By correlating data describing attacks with information in reports 133, server logs 132 and/or server data 135, server 210 may identify attacks that occurred in the past.).
the false positive log determination unit further compares a period from a first time information that is an earliest time information among the time information of the security logs to a second time information that is a latest time information among the time information with the prediction period, (Galula ¶0122: For example, server 210 may use predefined sequences of events that, when detected, indicate an anomaly or a cyber-threat or attack. In other cases, sets of events that, if seen together, e.g., occur at the same time or within a predefined time interval are defined and used for identifying cyber-threats, e.g., if a first and second predefined events occur within a time interval of 20 milliseconds (ms) then server 210 may determine an attack is in progress.) and determines whether or not the plurality of security logs is the false positive log. (Galula ¶0120: In some embodiments, server 210 is adapted to identify a false positive detection based on the classification. For example, specific events or sequences of events may be classified as legitimate or non-threats; accordingly, although, based on an event, server 210 may otherwise determine a cyber-threat or attack is present, based on a classification of the event server 210 may determine that identifying the event as representing a cyber-attack will be a false-positive detection.).

With respect to claim 10, the combination of Galula in view of Endo and Abdelaziz teaches the device of claim 1 (see rejection of claim 1 above), wherein: the electronic control system is mounted on a movable object; and the log determination device is provided outside the movable object.
(Galula ¶0030-0033: For example, a DCU 221 may be a sensor or other unit (e.g., a sniffer) adapted to capture messages or packets communicated over an in-vehicle network (e.g., Controller Area Network (CAN) or Ethernet packets, messages or frames) or a DCU 221 may be a sensor or component adapted to obtain information from electronic control units (ECUs) in a vehicle. Server 210 may be, or may be included in, a security operations center (SOC) that may manage various aspects related to the cyber-security of fleet 220 and, as seen in Figure 2, the server is outside the vehicle. Vehicles 220 may include an in-vehicle network that includes one or more electronic control units (ECUs) that control components in the in-vehicle network, and the in-vehicle network may include one or more security enforcement units (SEUs) (implying the electronic control unit is mounted in a vehicle which is a movable object).)

With respect to claim 11, Galula teaches a log determination method executable by a log determination device, (Abstract: A system and method for providing fleet cyber security may include collecting, by a plurality of data collection units installed in a respective plurality of vehicles in the fleet, information related to cyber security and including the information in reports to a server. Data in reports may be aggregated by the server. A cyberattack may be identified based on aggregated data). the log determination device including a pattern storage unit that is configured to store an occurrence pattern of a security log (¶0069 & ¶0077: Correlation as described may be according to any relevant aspect. For example, server 210 may correlate (compare, relate or examine together) reports or data from/of vehicles that were serviced at a specific service facility, vehicles that include components from a specific manufacturer, vehicles that were sold in the last three months, and so on.
Correlation may include examining reports or data related to a plurality of vehicles and looking for similar patterns or events, e.g., according to time, place, specific hardware, specific software, or any other aspect as described.) which is predicted to occur due to a maintenance of an electronic control system that is [an act of maintaining the electronic system at a predetermined place including a factory,] (¶0040-0043: Server 210 may store fleet data (e.g., in aggregated data 131). Fleet data may be created based on data received from a plurality of vehicles 220 and may include, for example, dealership and/or service station visit data, diagnostic feeds, in-vehicle update logs, ECU logs, GPS location of vehicles, server access logs, service provider data, vehicle software inventory, warranty data, weather reports, vehicle/ECU authentication logs (success and/or failure), and so on. Server 210 may fuse, correlate and/or aggregate data from other sources. For example, data received from dealerships, weather sources, traffic reports, the internet and so on may be fused or combined with data received from vehicles 220 such that various cyber-security conclusions may be derived. In some embodiments, aggregation may include examining, for a set of hacked or attacked vehicles, a manufacturing database (DB) in order to identify or determine whether all vehicles affected by an attack have one or more ECUs manufactured at the same factory and/or at the same date or time, thus a supply chain attack may be identified. (storing an occurrence pattern of a security log). For example, the ability to fuse an ECU manufacturing DB (e.g., in server 260) into data available to server 210 can enable server 210 to determine that a virus comes from a specific manufacturer.
) the electronic control system being mounted on a vehicle, (¶0030-0033: For example, a DCU 221 may be a sensor or other unit (e.g., a sniffer) adapted to capture messages or packets communicated over an in-vehicle network (e.g., Controller Area Network (CAN) or Ethernet packets, messages or frames) or a DCU 221 may be a sensor or component adapted to obtain information from electronic control units (ECUs) in a vehicle. Server 210 may be, or may be included in, a security operations center (SOC) that may manage various aspects related to the cyber-security of fleet 220. Vehicles 220 may include an in-vehicle network that includes one or more electronic control units (ECUs) that control components in the in-vehicle network, and the in-vehicle network may include one or more security enforcement units (SEUs). (implying the electronic control unit is mounted on a vehicle).); the occurrence pattern including a plurality of sets each including a prediction abnormality information indicating an abnormality which is predicted to be detected in the electronic control system (¶0094-0097: An embodiment may identify a vehicle’s state or context in time of attack (e.g., speed, ABS worker, ADAS etc.) to investigate if a certain state is common to attacks or might trigger it. For example, based on data in aggregated data 131 and/or based on messages received from vehicles 220, server 210 may identify, with respect to a specific attack, specific attack or specific attack type, the ABS system in a majority of (or even all) attacked vehicles was not working or the engine heat was increasing rapidly and so on. Accordingly, an embodiment may link or associate an attack (or attacker, or attack type) with a state, condition or context of vehicles such that based on state, condition or context, an attack may be identified or even predicted.
For example, by linking attacks to states as described, it may be discovered that, when a vehicle is in a specific state or condition, a specific ECU is vulnerable or is susceptible to attack.) and a prediction abnormality position information indicating a position of the abnormality that is predicted to be detected in the electronic control system, (¶0094-0097: An embodiment may group similar attacks by sequences of anomalies to identify if coming from the same source. For example, server 210 may examine logs related to attacks, identify therein similar aspects, e.g., similar anomalies such as frequency of messages, types of errors, similar timing or content sequences or other attributes of messages and associate attacks that exhibit similar attributes to a specific source (e.g., to a specific attacker)). the method comprising: acquiring a plurality of security logs (¶0017-0025: Reference is made to Figure 1 showing a high-level block diagram of a computing device according to some embodiments of the present invention. Abstract: A system and method for providing fleet cybersecurity may include collecting, by a plurality of data collection units installed in a respective plurality of vehicles in the fleet, information related to cybersecurity and including the information in reports to a server (configured to acquire a plurality of security logs). Data in reports may be aggregated by the server. A cyberattack may be identified based on aggregated data.) each including an abnormality information indicating an abnormality detected in an electronic control system and a position information indicating a position of the abnormality in the electronic control system; (¶0044-0045: Aggregation may include filtering true attacks from false-positives anomalous causes, e.g., ECU malfunction/replacement, minor mismatches between expected behavior and actual vehicle traffic (abnormality detected in electronic control system).
Aggregation may include cross referencing anomalies with ECU malfunctions (e.g., using ECU logs/diagnostic/car maintenance logs) and/or comparing a behavior of a vehicle (e.g., as exhibited by network traffic) over the in-vehicles exhibiting similar behaviors: in this manner false positives may be detected. Aggregation may include relating/comparing location of service access to/with dealership locations, e.g., to identify if someone is tuning or hacking a vehicle or launching an attack. For example, using geolocation data as described (position information), server 210 can determine that an access to a component in a vehicle is made outside of a dealership visit log and/or GPS location data as described. Aggregation may include searching (e.g., in logs as described) for a plurality of requests originating from a specific or same device (e.g., based on an International Mobile Equipment Identity (IMEI)) that appear to originate from different users.).

Galula does not disclose: an act of maintaining the electronic system at a predetermined place including a factory. However, Endo teaches an act of maintaining the electronic system at a predetermined place including a factory (¶0022-0023: As shown in Figure 1, the maintenance information management device 10, the management server 12, an in-vehicle apparatus of the vehicle V, and the terminal P are connected to each other so that they can communicate with each other through a network N. In addition, the maintenance information management device 10 is connected to a maintenance factory.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Endo regarding the act of maintaining the electronic control system of the vehicle at a predetermined place including a factory with the method of Galula in order to allow the use of maintenance information for wide use, such as for detecting false positives (Endo ¶0004-0005).
Galula in view of Endo does not disclose: comparing the plurality of security logs with the occurrence pattern predicted to [occur due to the act of maintaining the electronic control system of the vehicle at the predetermined place including the factory;] determining whether or not the plurality of security logs is a false positive log generated by detecting an abnormality caused by the maintenance; and outputting a determination result. However, Abdelaziz teaches comparing the plurality of security logs with the occurrence pattern predicted to [occur due to the act of maintaining the electronic control system of the vehicle at the predetermined place including the factory;] determining whether or not the plurality of security logs is a false positive log generated by detecting an abnormality [caused by the maintenance;] and outputting a determination result. (¶0083 & ¶0096: As seen in Figure 8, the sign-in data comprises stored and/or real time user/sign-in data that is evaluated against stored patterns and definitions, such as stored in the risk profiles report 830, label 835 or another detector definition data structure 840, which is stored by the cloud/service portal or that is otherwise accessible to the cloud/server portal. The computing system generates a label data file that contains a listing of the corresponding user risk profiles and that specifies each corresponding user risk profile in the listing as either a false positive, false negative, true positive or true negative, based on the one or more user risk reports and the user input.) Although Galula discloses in ¶0082-0084 & ¶0109 detecting an abnormality caused by maintenance and determining a false positive log by occurrences stored in a server, the prior art does not compare the overall aggregated data with specifically stored patterns.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Abdelaziz regarding comparing the aggregated data with stored patterns with the method of Galula in view of Endo in order to improve responsiveness, accuracy and effectiveness of security within a system (Abdelaziz ¶0033).

With respect to claim 12, Galula teaches a non-transitory computer-readable storage medium storing a log determination program executable by a log determination device, (¶0017-0022: Memory 120 may be a computer or processor non-transitory readable medium, or a computer non-transitory storage medium, e.g., a RAM. Some embodiments may include a non-transitory storage medium having stored thereon instructions which when executed cause the processor to carry out methods disclosed herein.). the log determination device including a pattern storage unit that is configured to store an occurrence pattern of a security log (¶0069 & ¶0077: Correlation as described may be according to any relevant aspect. For example, server 210 may correlate (compare, relate or examine together) reports or data from/of vehicles that were serviced at a specific service facility, vehicles that include components from a specific manufacturer, vehicles that were sold in the last three months, and so on. Correlation may include examining reports or data related to a plurality of vehicles and looking for similar patterns or events, e.g., according to time, place, specific hardware, specific software, or any other aspect as described.) which is predicted to occur due to a maintenance of an electronic control system that is [an act of maintaining the electronic control system at a predetermined place including a factory,] (¶0040-0043: Server 210 may store fleet data (e.g., in aggregated data 131).
Fleet data may be created based on data received from a plurality of vehicles 220 and may include, for example, dealership and/or service station visit data, diagnostic feeds, in-vehicle update logs, ECU logs, GPS location of vehicles, server access logs, service provider data, vehicle software inventory, warranty data, weather reports, vehicle ECU authentication logs (success and/or failure), and so on. Server 210 may fuse, correlate and/or aggregate data from other sources. For example, data received from dealerships, weather sources, traffic reports, the internet and so on may be fused or combined with data received from vehicles 220 such that various cyber-security conclusions may be derived. In some embodiments, aggregation may include examining, for a set of hacked or attacked vehicles, a manufacturing database (DB) in order to identify or determine whether all vehicles affected by an attack have one or more ECUs manufactured at the same factory and/or at the same date or time, thus a supply chain attack may be identified. (storing an occurrence pattern of a security log). For example, the ability to fuse an ECU manufacturing DB (e.g., in server 260) into data available to server 210 can enable server 210 to determine that a virus comes from a specific manufacturer. ) the electronic control system being mounted on a vehicle (¶0030-0033: For example, a DCU 221 may be a sensor or other unit (e.g., a sniffer) adapted to capture messages or packets communicated over an in-vehicle network (e.g., Controller Area Network (CAN) or Ethernet packets, messages or frames) or a DCU 221 may be a sensor or component adapted to obtain information from electronic control units (ECUs) in a vehicle. Server 210 may be, or may be included in, a security operations center (SOC) that may manage various aspects related to the cyber-security of fleet 220. 
Vehicles 220 may include an in-vehicle network that includes one or more electronic control units (ECUs) that control components in the in-vehicle network and the in-vehicle network may include one or more security enforcement units (SEUs). (implying the electronic control unit is mounted on a vehicle).); the occurrence pattern including a plurality of sets each including a prediction abnormality information indicating an abnormality which is predicted to be detected in the electronic control system (¶0094-0097: An embodiment may identify a vehicle’s state or context in time of attack (e.g., speed, ABS worker, ADAS etc.) to investigate if a certain state is common to attacks or might trigger it. For example, based on data in aggregated data 131 and/or based on messages received from vehicles 220, server 210 may identify, with respect to a specific attack or specific attack type, that the ABS system in a majority of (or even all) attacked vehicles was not working or the engine heat was increasing rapidly and so on. Accordingly an embodiment may link or associate an attack (or attacker, or attack type) with a state, condition or context of vehicles such that based on state, condition or context, an attack may be identified or even predicted. For example, by linking attacks to states as described, it may be discovered that, when a vehicle is in a specific state or condition, a specific ECU is vulnerable or is susceptible to attack.) and a prediction abnormality position information indicating a position of the abnormality that is predicted to be detected in the electronic control system, (¶0094-0097: An embodiment may group similar attacks by sequences of anomalies to identify if coming from the same source. 
For example, server 210 may examine logs related to attacks, identify therein similar aspects, e.g., similar anomalies such as frequency of messages, types of errors, similar timing or content sequences or other attributes of messages and associate attacks that exhibit similar attributes to a specific source (e.g., to a specific attacker)). the program comprising instructions of: acquiring a plurality of security logs (¶0017-0025: Reference is made to Figure 1 showing a high-level block diagram of a computer device according to some embodiments of the present invention. Abstract: A system and method for providing fleet cybersecurity may include collecting, by a plurality of data collection units installed in a respective plurality of vehicles in the fleet, information related to cybersecurity and including the information in reports to a server (configured to acquire a plurality of security logs). Data in reports may be aggregated by the server. A cyberattack may be identified based on aggregated data.) each including an abnormality information indicating an abnormality detected in an electronic control system and a position information indicating a position of the abnormality in the electronic control system; (¶0044-0045: Aggregation may include filtering true attacks from false-positives anomalous causes e.g., ECU malfunction/replacement, minor mismatches between expected behavior and actual vehicle traffic (abnormality detected in electronic control system). Aggregation may include cross referencing anomalies with ECU malfunctions (e.g., using ECU logs/ diagnostic/ car maintenance logs) and/or comparing a behavior of a vehicle (e.g., as exhibited by network traffic) over the in-vehicle network with vehicles exhibiting similar behaviors: in this manner false positives may be detected. Aggregation may include relating/comparing location of service access to/with dealership locations e.g., to identify if someone is tuning or hacking a vehicle or launching an attack. 
For example, using geolocation data as described (position information), server 210 can determine that an access to a component in a vehicle is made outside of a dealership visit log and/or GPS location data as described. Aggregation may include searching (e.g., in logs as described) for a plurality of requests originating from a specific or same device (e.g., based on an International Mobile Equipment Identity (IMEI)) that appear to originate from different users. ). Galula does not disclose: an act of maintaining the electronic control system at a predetermined place including a factory. However, Endo teaches an act of maintaining the electronic control system at a predetermined place including a factory, (¶0022-0023: As shown in Figure 1, the maintenance information management device 10, the management server 12, an in-vehicle apparatus of the vehicle V, and the terminal P are connected to each other so that they can communicate with each other through a network N. In addition, the maintenance information management device 10 is connected to a maintenance factory. ); It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Endo regarding the act of maintaining the electronic control system of the vehicle at a predetermined place including a factory to the method of Galula in order to allow the use of maintenance information for wide use such as for detecting false positives (Endo ¶0004-0005). Galula in view of Endo does not disclose: comparing the plurality of security logs with the occurrence pattern predicted to occur due to the act of maintaining the electronic control system of the vehicle at the predetermined place including the factory; determining whether or not the plurality of security logs is a false positive log generated by detecting an abnormality caused by the maintenance; and outputting a determination result. 
However, Abdelaziz teaches comparing the plurality of security logs with the occurrence pattern predicted to occur due to [the act of maintaining the electronic control system of the vehicle at the predetermined place including factory;] determining whether or not the plurality of security logs is a false positive log generated by detecting an abnormality [caused by the maintenance;] and outputting a determination result. (¶0083 & ¶0096: As seen in Figure 8, the sign-in data comprises stored and/or real time user/sign-in data that is evaluated against stored patterns and definitions, such as stored in the risk profiles report 830, label 835 or another detector definition data structure 840, which is stored by the cloud/service portal or that is otherwise accessible to the cloud/server portal. The computing system generates a label data file that contains a listing of the corresponding user risk profiles and that specifies each corresponding user risk profile in the listing as either a false positive, false negative, true positive or true negative, based on the one or more user risk reports and the user input.) Although Galula discloses in ¶0082-0084 & ¶0109 detecting an abnormality caused by maintenance and determining a false positive log by occurrences stored in a server, the prior art does not compare the overall aggregated data with specifically stored patterns. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Abdelaziz regarding comparing the aggregated data with stored patterns to the method of Galula in view of Endo in order to improve responsiveness, accuracy and effectiveness of security within a system (Abdelaziz ¶0033). 
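The determination recited in claim 12 — comparing each acquired security log (abnormality information plus position information) against a stored occurrence pattern of abnormalities predicted to result from maintenance, and outputting whether the logs are false positives — can be sketched as follows. This is a minimal, hypothetical illustration: the class, function, and pattern entries are invented for clarity and do not appear in the claims or in the cited references.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityLog:
    abnormality: str  # abnormality information detected in the electronic control system
    position: str     # position information (e.g., which ECU reported the abnormality)

# Occurrence pattern: a plurality of sets, each pairing a prediction abnormality
# with a prediction abnormality position, expected to be generated by maintenance
# at a predetermined place such as a factory. Entries are hypothetical.
MAINTENANCE_PATTERN = {
    ("auth_failure", "gateway_ecu"),
    ("firmware_mismatch", "engine_ecu"),
}

def determine_false_positive(logs, pattern):
    """Compare the acquired security logs with the stored occurrence pattern.

    Returns True (a false positive caused by maintenance) only when every
    log's (abnormality, position) pair matches a set in the pattern;
    any unmatched log leaves the result False (potential true attack).
    """
    return all((log.abnormality, log.position) in pattern for log in logs)

# Usage: logs matching the maintenance pattern are judged a false positive.
maintenance_logs = [SecurityLog("auth_failure", "gateway_ecu"),
                    SecurityLog("firmware_mismatch", "engine_ecu")]
result = determine_false_positive(maintenance_logs, MAINTENANCE_PATTERN)
```

The design point this sketch isolates is the one the examiner attributes to Abdelaziz rather than Galula: the aggregated logs are tested against specifically stored patterns, not merely cross-referenced with other fleet data.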
With respect to claim 13, Galula teaches a log determination system comprising: (Abstract: A system and method for providing fleet cyber security may include collecting, by a plurality of data collection units installed in a respective plurality of vehicles in the fleet, information related to cyber security and including the information in reports to a server. Data in reports may be aggregated by the server. A cyberattack may be identified based on aggregated data). an electronic control system mounted on a vehicle; and a log determination device, (¶0030-0033: For example, a DCU 221 may be a sensor or other unit (e.g., a sniffer) adapted to capture messages or packets communicated over an in-vehicle network (e.g., Controller Area Network (CAN) or Ethernet packets, messages or frames) or a DCU 221 may be a sensor or component adapted to obtain information from electronic control units (ECUs) in a vehicle. Server 210 may be, or may be included in, a security operations center (SOC) that may manage various aspects related to the cyber-security of fleet 220. Vehicles 220 may include an in-vehicle network that includes one or more electronic control units (ECUs) that control components in the in-vehicle network and the in-vehicle network may include one or more security enforcement units (SEUs). (implying the electronic control unit is mounted on a vehicle).) wherein: the electronic control system includes a computer including a memory storing a program and a processor configured to execute the program causing the computer to implement: (¶0029-0031: As shown in Figure 2, Server 210 (log determination device) may be, or may be included in, a security operations center (SOC) that may manage various aspects related to the cyber-security of fleet 220. 
Vehicles 220 may include an in-vehicle network that includes one or more electronic control units (ECUs) that control components (electronic control system) in the in-vehicle network and the in-vehicle network may include one or more security enforcement units (SEUs).). a log generation unit that is configured to generate a security log (¶0107: As described, a system according to some embodiments includes a server 210 and a plurality of DCUs 221 installed in a respective plurality of vehicles in a fleet of vehicles. In some embodiments, to provide fleet cyber-security, DCUs 221 are adapted to collect information related to cyber security and to include the information in reports sent to server 210. For example, reports from DCUs may be stored by server 210 as shown by reports 133 and reports 133 may be used to produce aggregated data 131.) including an abnormality information indicating an abnormality and a position information indicating a position of the abnormality detected in the electronic control system, in a case where the abnormality has been detected in the electronic control system, and a log transmission unit that is configured to transmit the security log to the log determination device; and (¶0044-0045: Aggregation may include filtering true attacks from false-positives anomalous causes e.g., ECU malfunction/replacement, minor mismatches between expected behavior and actual vehicle traffic (abnormality detected in electronic control system). Aggregation may include cross referencing anomalies with ECU malfunctions (e.g., using ECU logs/ diagnostic/ car maintenance logs) and/or comparing a behavior of a vehicle (e.g., as exhibited by network traffic) over the in-vehicle network with vehicles exhibiting similar behaviors: in this manner false positives may be detected. Aggregation may include relating/comparing location of service access to/with dealership locations e.g., to identify if someone is tuning or hacking a vehicle or launching an attack. 
For example, using geolocation data as described (position information), server 210 can determine that an access to a component in a vehicle is made outside of a dealership visit log and/or GPS location data as described. Aggregation may include searching (e.g., in logs as described) for a plurality of requests originating from a specific or same device (e.g., based on an International Mobile Equipment Identity (IMEI)) that appear to originate from different users. ). the log determination device includes a log acquisition unit that is configured to acquire a plurality of security logs transmitted from the log transmission unit, (¶0017-0025: Reference is made to Figure 1 showing a high-level block diagram of a computer device according to some embodiments of the present invention. Abstract: A system and method for providing fleet cybersecurity may include collecting, by a plurality of data collection units installed in a respective plurality of vehicles in the fleet, information related to cybersecurity and including the information in reports to a server (configured to acquire a plurality of security logs). Data in reports may be aggregated by the server. A cyberattack may be identified based on aggregated data.). a pattern storage unit that is configured to store an occurrence pattern of a security log (¶0069 & ¶0077: Correlation as described may be according to any relevant aspect. For example, server 210 may correlate (compare, relate or examine together) reports or data from/of vehicles that were serviced at a specific service facility, vehicles that include components from a specific manufacturer, vehicles that were sold in the last three months, and so on. Correlation may include examining reports or data related to a plurality of vehicles and looking for similar patterns or events e.g., according to time, place, specific hardware, specific software, or any other aspect as described.) 
which is predicted to occur due to a maintenance of an electronic control system that is [an act of maintaining the electronic control system of the vehicle at a predetermined place including a factory,] (¶0040-0043: Server 210 may store fleet data (e.g., in aggregated data 131). Fleet data may be created based on data received from a plurality of vehicles 220 and may include, for example, dealership and/or service station visit data, diagnostic feeds, in-vehicle update logs, ECU logs, GPS location of vehicles, server access logs, service provider data, vehicle software inventory, warranty data, weather reports, vehicle ECU authentication logs (success and/or failure), and so on. Server 210 may fuse, correlate and/or aggregate data from other sources. For example, data received from dealerships, weather sources, traffic reports, the internet and so on may be fused or combined with data received from vehicles 220 such that various cyber-security conclusions may be derived. In some embodiments, aggregation may include examining, for a set of hacked or attacked vehicles, a manufacturing database (DB) in order to identify or determine whether all vehicles affected by an attack have one or more ECUs manufactured at the same factory and/or at the same date or time, thus a supply chain attack may be identified. (storing an occurrence pattern of a security log). For example, the ability to fuse an ECU manufacturing DB (e.g., in server 260) into data available to server 210 can enable server 210 to determine that a virus comes from a specific manufacturer. ). the occurrence pattern including a plurality of sets each including a prediction abnormality information indicating an abnormality which is predicted to be detected in the electronic control system (¶0094-0097: An embodiment may identify a vehicle’s state or context in time of attack (e.g., speed, ABS worker, ADAS etc.) to investigate if a certain state is common to attacks or might trigger it. 
For example, based on data in aggregated data 131 and/or based on messages received from vehicles 220, server 210 may identify, with respect to a specific attack or specific attack type, that the ABS system in a majority of (or even all) attacked vehicles was not working or the engine heat was increasing rapidly and so on. Accordingly an embodiment may link or associate an attack (or attacker, or attack type) with a state, condition or context of vehicles such that based on state, condition or context, an attack may be identified or even predicted. For example, by linking attacks to states as described, it may be discovered that, when a vehicle is in a specific state or condition, a specific ECU is vulnerable or is susceptible to attack.) and a prediction abnormality position information indicating a position of the abnormality that is predicted to be detected in the electronic control system, and (¶0094-0097: An embodiment may group similar attacks by sequences of anomalies to identify if coming from the same source. For example, server 210 may examine logs related to attacks, identify therein similar aspects, e.g., similar anomalies such as frequency of messages, types of errors, similar timing or content sequences or other attributes of messages and associate attacks that exhibit similar attributes to a specific source (e.g., to a specific attacker)). Galula does not disclose: an act of maintaining the electronic control system of the vehicle at a predetermined place including a factory. However, Endo teaches an act of maintaining the electronic control system of the vehicle at a predetermined place including a factory, (¶0022-0023: As shown in Figure 1, the maintenance information management device 10, the management server 12, an in-vehicle apparatus of the vehicle V, and the terminal P are connected to each other so that they can communicate with each other through a network N. 
In addition, the maintenance information management device 10 is connected to a maintenance factory. ); It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Endo regarding the act of maintaining the electronic control system of the vehicle at a predetermined place including a factory to the method of Galula in order to allow the use of maintenance information for wide use such as for detecting false positives (Endo ¶0004-0005). Galula in view of Endo does not disclose: a false positive log determination unit that is configured to compare the plurality of security logs with the occurrence pattern predicted to occur due to the act of maintaining the electronic control system of the vehicle at the predetermined place including the factory, determine whether or not the plurality of security logs is a false positive log generated by detecting an abnormality caused by the maintenance, and output a determination result. However, Abdelaziz teaches a false positive log determination unit that is configured to compare the plurality of security logs with the occurrence pattern predicted to occur due to [the act of maintaining the electronic control system of the vehicle at the predetermined place including the factory,] determine whether or not the plurality of security logs is a false positive log generated by detecting an abnormality [caused by the maintenance,] and output a determination result. (¶0083 & ¶0096: As seen in Figure 8, the sign-in data comprises stored and/or real time user/sign-in data that is evaluated against stored patterns and definitions, such as stored in the risk profiles report 830, label 835 or another detector definition data structure 840, which is stored by the cloud/service portal or that is otherwise accessible to the cloud/server portal. 
The computing system generates a label data file that contains a listing of the corresponding user risk profiles and that specifies each corresponding user risk profile in the listing as either a false positive, false negative, true positive or true negative, based on the one or more user risk reports and the user input.) Although Galula discloses in ¶0082-0084 & ¶0109 detecting an abnormality caused by maintenance and determining a false positive log by occurrences stored in a server, the prior art does not compare the overall aggregated data with specifically stored patterns. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Abdelaziz regarding comparing the aggregated data with stored patterns to the method of Galula in view of Endo in order to improve responsiveness, accuracy and effectiveness of security within a system (Abdelaziz ¶0033). With respect to claim 14, Galula teaches a log determination device comprising: a processor and a memory storing a program that causes the processor to perform: (¶0019-0022: Memory 120 may be a computer or processor non-transitory readable medium, or a computer non-transitory storage medium, e.g., a RAM. Some embodiments may include a non-transitory storage medium having stored thereon instructions which when executed cause the processor to carry out methods disclosed herein. ). acquiring a plurality of security logs (¶0017-0025: Reference is made to Figure 1 showing a high-level block diagram of a computer device according to some embodiments of the present invention. Abstract: A system and method for providing fleet cybersecurity may include collecting, by a plurality of data collection units installed in a respective plurality of vehicles in the fleet, information related to cybersecurity and including the information in reports to a server (configured to acquire a plurality of security logs). 
Data in reports may be aggregated by the server. A cyberattack may be identified based on aggregated data.) each including an abnormality information indicating an abnormality detected in an electronic control system mounted on a vehicle (¶0030-0033: For example, a DCU 221 may be a sensor or other unit (e.g., a sniffer) adapted to capture messages or packets communicated over an in-vehicle network (e.g., Controller Area Network (CAN) or Ethernet packets, messages or frames) or a DCU 221 may be a sensor or component adapted to obtain information from electronic control units (ECUs) in a vehicle. Server 210 may be, or may be included in, a security operations center (SOC) that may manage various aspects related to the cyber-security of fleet 220. Vehicles 220 may include an in-vehicle network that includes one or more electronic control units (ECUs) that control components in the in-vehicle network and the in-vehicle network may include one or more security enforcement units (SEUs). (implying the electronic control unit is mounted on a vehicle).) and a position information indicating a position of the abnormality in the electronic control system; (¶0044-0045: Aggregation may include filtering true attacks from false-positives anomalous causes e.g., ECU malfunction/replacement, minor mismatches between expected behavior and actual vehicle traffic (abnormality detected in electronic control system). Aggregation may include cross referencing anomalies with ECU malfunctions (e.g., using ECU logs/ diagnostic/ car maintenance logs) and/or comparing a behavior of a vehicle (e.g., as exhibited by network traffic) over the in-vehicle network with vehicles exhibiting similar behaviors: in this manner false positives may be detected. Aggregation may include relating/comparing location of service access to/with dealership locations e.g., to identify if someone is tuning or hacking a vehicle or launching an attack. 
For example, using geolocation data as described (position information), server 210 can determine that an access to a component in a vehicle is made outside of a dealership visit log and/or GPS location data as described. Aggregation may include searching (e.g., in logs as described) for a plurality of requests originating from a specific or same device (e.g., based on an International Mobile Equipment Identity (IMEI)) that appear to originate from different users. ). acquiring a feature of a security log which is predicted to occur due to a maintenance of the electronic control system that is [an act of maintaining the electronic control system of the vehicle at a predetermined place including a factory], from a storage storing the feature; and (¶0040-0043: As seen in combination of Figure 1 and Figure 2, wherein the server 210 may store fleet data (e.g., in aggregated data 131). Fleet data may be created based on data received from a plurality of vehicles 220 and may include, for example, dealership and/or service station visit data, diagnostic feeds, in-vehicle update logs, ECU logs, GPS location of vehicles, server access logs, service provider data, vehicle software inventory, warranty data, weather reports, vehicle ECU authentication logs (success and/or failure), and so on. Server 210 may fuse, correlate and/or aggregate data from other sources. For example, data received from dealerships, weather sources, traffic reports, the internet and so on may be fused or combined with data received from vehicles 220 such that various cyber-security conclusions may be derived. In some embodiments, aggregation may include examining, for a set of hacked or attacked vehicles, a manufacturing database (DB) in order to identify or determine whether all vehicles affected by an attack have one or more ECUs manufactured at the same factory and/or at the same date or time, thus a supply chain attack may be identified. (storing an occurrence pattern of a security log). 
For example, the ability to fuse an ECU manufacturing DB (e.g., in server 260) into data available to server 210 can enable server 210 to determine that a virus comes from a specific manufacturer. ). Galula does not disclose: an act of maintaining the electronic control system of the vehicle at a predetermined place including a factory. However, Endo teaches an act of maintaining the electronic control system of the vehicle at a predetermined place including a factory (¶0022-0023: As shown in Figure 1, the maintenance information management device 10, the management server 12, an in-vehicle apparatus of the vehicle V, and the terminal P are connected to each other so that they can communicate with each other through a network N. In addition, the maintenance information management device 10 is connected to a maintenance factory. ); It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Endo regarding the act of maintaining the electronic control system of the vehicle at a predetermined place including a factory to the method of Galula in order to allow the use of maintenance information for wide use such as for detecting false positives (Endo ¶0004-0005). Galula in view of Endo does not disclose: comparing the plurality of security logs with the feature of the security log predicted to occur due to the act of maintaining the electronic control system of the vehicle at a predetermined place including the factory, to make a determination of whether or not the plurality of security logs is a false positive log generated by detecting an abnormality caused by the maintenance. 
However, Abdelaziz teaches comparing the plurality of security logs with the feature of the security log predicted to occur due to [the act of maintaining the electronic control system of the vehicle at a predetermined place including the factory,] to make a determination of whether or not the plurality of security logs is a false positive log generated by detecting an abnormality [caused by the maintenance.] (¶0083 & ¶0096: As seen in Figure 8, the sign-in data comprises stored and/or real time user/sign-in data that is evaluated against stored patterns and definitions, such as stored in the risk profiles report 830, label 835 or another detector definition data structure 840, which is stored by the cloud/service portal or that is otherwise accessible to the cloud/server portal. The computing system generates a label data file that contains a listing of the corresponding user risk profiles and that specifies each corresponding user risk profile in the listing as either a false positive, false negative, true positive or true negative, based on the one or more user risk reports and the user input.) Although Galula discloses in ¶0082-0084 & ¶0109 detecting an abnormality caused by maintenance and determining a false positive log by occurrences stored in a server, the prior art does not compare the overall aggregated data with specifically stored patterns. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Abdelaziz regarding comparing the aggregated data with stored patterns to the method of Galula in view of Endo in order to improve responsiveness, accuracy and effectiveness of security within a system (Abdelaziz ¶0033). 
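The Galula aggregation step quoted repeatedly above (¶0044-0045) — relating the location and time of a service access to dealership visit logs to separate maintenance activity from a possible attack — might be sketched as follows. The data layout, names, and times are hypothetical illustrations for this discussion, not structures taken from the reference.

```python
from datetime import datetime

def outside_service_visit(access_time, access_location, visit_log):
    """Cross-reference an access event with recorded service visits.

    Returns False when the access falls inside a logged visit window at the
    same location (explainable as maintenance, i.e., a likely false positive);
    returns True when no visit explains it (a potential true attack).
    """
    for visit in visit_log:
        if (visit["location"] == access_location
                and visit["start"] <= access_time <= visit["end"]):
            return False  # access occurred during a logged service visit
    return True

# Usage: a hypothetical dealership visit log for one vehicle.
visits = [{"location": "dealership_A",
           "start": datetime(2025, 10, 1, 9, 0),
           "end": datetime(2025, 10, 1, 17, 0)}]

during_visit = outside_service_visit(datetime(2025, 10, 1, 10, 30),
                                     "dealership_A", visits)   # explained
after_hours = outside_service_visit(datetime(2025, 10, 2, 3, 0),
                                    "dealership_A", visits)    # unexplained
```

This cross-referencing is the distinction the examiner draws: Galula filters false positives by correlating logs with other fleet data (visits, geolocation), while the claimed invention, per the mapping to Abdelaziz, compares logs against specifically stored occurrence patterns.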
With respect to claim 16, Galula teaches a non-transitory computer-readable storage medium storing a program executable by a computer, wherein the program, when executed by the computer, causes the computer to perform: (¶0017-0022: Memory 120 may be a computer or processor non-transitory readable medium, or a computer non-transitory storage medium, e.g., a RAM. Some embodiments may include a non-transitory storage medium having stored thereon instructions which when executed cause the processor to carry out methods disclosed herein.). acquiring a plurality of security logs (¶0017-0025: Reference is made to Figure 1 showing a high-level block diagram of a computer device according to some embodiments of the present invention. Abstract: A system and method for providing fleet cybersecurity may include collecting, by a plurality of data collection units installed in a respective plurality of vehicles in the fleet, information related to cybersecurity and including the information in reports to a server (configured to acquire a plurality of security logs). Data in reports may be aggregated by the server. A cyberattack may be identified based on aggregated data.) each including an abnormality information indicating an abnormality detected in an electronic control system mounted on a vehicle (¶0030-0033: For example, a DCU 221 may be a sensor or other unit (e.g., a sniffer) adapted to capture messages or packets communicated over an in-vehicle network (e.g., Controller Area Network (CAN) or Ethernet packets, messages or frames) or a DCU 221 may be a sensor or component adapted to obtain information from electronic control units (ECUs) in a vehicle. Server 210 may be, or may be included in, a security operations center (SOC) that may manage various aspects related to the cyber-security of fleet 220. 
Vehicles 220 may include an in-vehicle network that includes one or more electronic control units (ECUs) that control components in the in-vehicle network and the in-vehicle network may include one or more security enforcement units (SEUs). (implying the electronic control unit is mounted on a vehicle).) and a position information indicating a position of the abnormality in the electronic control system; (¶0044-0045: Aggregation may include filtering true attacks from false-positives anomalous causes e.g., ECU malfunction/replacement, minor mismatches between expected behavior and actual vehicle traffic (abnormality detected in electronic control system). Aggregation may include cross referencing anomalies with ECU malfunctions (e.g., using ECU logs/ diagnostic/ car maintenance logs) and/or comparing a behavior of a vehicle (e.g., as exhibited by network traffic) over the in-vehicle network with vehicles exhibiting similar behaviors: in this manner false positives may be detected. Aggregation may include relating/comparing location of service access to/with dealership locations e.g., to identify if someone is tuning or hacking a vehicle or launching an attack. For example, using geolocation data as described (position information), server 210 can determine that an access to a component in a vehicle is made outside of a dealership visit log and/or GPS location data as described. Aggregation may include searching (e.g., in logs as described) for a plurality of requests originating from a specific or same device (e.g., based on an International Mobile Equipment Identity (IMEI)) that appear to originate from different users. ). 
acquiring a feature of a security log which is predicted to occur due to a maintenance of the electronic control system that is [act of maintaining the electronic control system of the vehicle at a predetermined place including a factory], from a storage storing the feature of the security log; and (¶0040-0043: As seen in the combination of Figure 1 and Figure 2, the server 210 may store fleet data (e.g., in aggregated data 131). Fleet data may be created based on data received from a plurality of vehicles 220 and may include, for example, dealership and/or service station visit data, diagnostic feeds, in-vehicle update logs, ECU logs, GPS location of vehicles, server access logs, service provider data, vehicle software inventory, warranty data, weather reports, vehicle/ECU authentication logs (success and/or failure), and so on. Server 210 may fuse, correlate and/or aggregate data from other sources. For example, data received from dealerships, weather sources, traffic reports, the internet and so on may be fused or combined with data received from vehicles 220 such that various cyber-security conclusions may be derived. In some embodiments, aggregation may include examining, for a set of hacked or attacked vehicles, a manufacturing database (DB) in order to identify or determine whether all vehicles affected by an attack have one or more ECUs manufactured at the same factory and/or at the same date or time; thus a supply chain attack may be identified (storing an occurrence pattern of a security log). For example, the ability to fuse an ECU manufacturing DB (e.g., in server 260) into data available to server 210 can enable server 210 to determine that a virus comes from a specific manufacturer.). 
Galula does not disclose: act of maintaining the electronic control system of the vehicle at a predetermined place including a factory. However, Endo teaches the act of maintaining the electronic control system of the vehicle at a predetermined place including a factory (¶0022-0023: As shown in Figure 1, the maintenance information management device 10, the management server 12, an in-vehicle apparatus of the vehicle V, and the terminal P are connected to each other so that they can communicate with each other through a network N. In addition, the maintenance information management device 10 is connected to a maintenance factory.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Endo regarding the act of maintaining the electronic control system of the vehicle at a predetermined place including a factory to the method of Galula in order to allow the use of maintenance information for wide use, such as for detecting false positives (Endo ¶0004-0005). Galula in view of Endo does not disclose: comparing the plurality of security logs with the feature of the security log predicted to occur due to the act of maintaining the electronic control system of the vehicle at the predetermined place including the factory, to make a determination of whether or not the plurality of security logs is a false positive log generated by detecting an abnormality caused by the maintenance. However, Abdelaziz teaches comparing the plurality of security logs with the feature of the security log predicted to occur due to [the act of maintaining the electronic control system of the vehicle at the predetermined place including the factory,] to make a determination of whether or not the plurality of security logs is a false positive log generated by detecting an abnormality [caused by the maintenance. 
] (¶0083 & ¶0096: As seen in Figure 8, the sign-in data comprises stored and/or real time user/sign-in data that is evaluated against stored patterns and definitions, such as stored in the risk profiles report 830, label 835 or another detector definition data structure 840, which is stored by the cloud/service portal or that is otherwise accessible to the cloud/server portal. The computing system generates a label data file that contains a listing of the corresponding user risk profiles and that specifies each corresponding user risk profile in the listing as either a false positive, false negative, true positive or true negative, based on the one or more user risk reports and the user input.). Although Galula discloses in ¶0082-0084 & ¶0109 detecting an abnormality caused by maintenance and determining a false positive log by occurrences stored in a server, the prior art does not compare the overall aggregated data with specifically stored patterns. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Abdelaziz regarding comparing the aggregated data with stored patterns to the method of Galula in view of Endo in order to improve the responsiveness, accuracy and effectiveness of security within a system (Abdelaziz ¶0033). With respect to claim 17, Galula teaches a system comprising a vehicle and a log determination device placed outside the vehicle (Abstract: A system and method for providing fleet cyber security may include collecting, by a plurality of data collection units installed in a respective plurality of vehicles in the fleet, information related to cyber security and including the information in reports to a server. Data in reports may be aggregated by the server (placed outside of the vehicle, as also shown in Figure 2). A cyberattack may be identified based on aggregated data). 
wherein the vehicle includes an electronic control system mounted on a vehicle, (¶0030-0033: For example, a DCU 221 may be a sensor or other unit (e.g., a sniffer) adapted to capture messages or packets communicated over an in-vehicle network (e.g., Controller Area Network (CAN) or Ethernet packets, messages or frames) or a DCU 221 may be a sensor or component adapted to obtain information from electronic control units (ECUs) in a vehicle. Server 210 may be, or may be included in, a security operations center (SOC) that may manage various aspects related to the cyber-security of fleet 220. Vehicles 220 may include an in-vehicle network that includes one or more electronic control units (ECUs) control components in the in-vehicle network and the in-vehicle network may include one or more security enforcement units (SEUs). (implying the electronic control unit is mounted on a vehicle).) the electronic control system includes a plurality of electronic control units connected via a network in the vehicle, (¶0033: Server 210 may be, or may be included in, a security operations center (SOC) that may manage various aspects related to the cyber-security of fleet 220. Vehicles 220 may include an in-vehicle network that includes one or more electronic control units (ECUs) control components in the in-vehicle network and the in-vehicle network may include one or more security enforcement units (SEUs).). each electronic control unit includes one or more security sensors configured to detect an abnormality in the electronic control system, (¶0030: As seen in Figure 2, a DCU 221 may be any applicable unit, e.g., a sensor adapted to obtain information. 
For example, a DCU 221 may be a sensor or other unit (e.g., a sniffer) adapted to capture messages or packets communicated over an in-vehicle network (e.g., Controller Area Network (CAN) or Ethernet packets, messages or frames) or a DCU 221 may be a sensor or component adapted to obtain information from electronic control units (ECUs) in a vehicle.). wherein the log determination device includes a processor and a memory storing a program that causes the processor to perform: acquiring a plurality of security logs (¶0017-0025: Reference is made to Figure 1 showing a high-level block diagram of a computer device according to some embodiments of the present invention. Abstract: A system and method for providing fleet cybersecurity may include collecting, by a plurality of data collection units installed in a respective plurality of vehicles in the fleet, information related to cybersecurity and including the information in reports to a server (configured to acquire a plurality of security logs). Data in reports may be aggregated by the server. A cyberattack may be identified based on aggregated data.) each including an abnormality information indicating the abnormality detected in the electronic control system and a position information indicating a position of the abnormality in the electronic control system; (¶0044-0045: Aggregation may include filtering true attacks from false-positives anomalous causes, e.g., ECU malfunction/replacement, minor mismatches between expected behavior and actual vehicle traffic (abnormality detected in electronic control system). Aggregation may include cross referencing anomalies with ECU malfunctions (e.g., using ECU logs/ diagnostic/ car maintenance logs) and/or comparing a behavior of a vehicle (e.g., as exhibited by network traffic) with other vehicles exhibiting similar behaviors; in this manner false positives may be detected. 
Aggregation may include relating/comparing location of service access to/with dealership locations, e.g., to identify if someone is tuning or hacking a vehicle or launching an attack. For example, using geolocation data as described (position information), server 210 can determine that an access to a component in a vehicle is made outside of a dealership visit log and/or GPS location data as described. Aggregation may include searching (e.g., in logs as described) for a plurality of requests originating from a specific or same device (e.g., based on an International Mobile Equipment Identity (IMEI)) that appear to originate from different users.). acquiring a feature of a security log which is predicted to occur due to a maintenance of the electronic control system that is [act of maintaining the electronic control system of the vehicle at a predetermined place including a factory], from a storage storing the feature; and (¶0040-0043: As seen in the combination of Figure 1 and Figure 2, the server 210 may store fleet data (e.g., in aggregated data 131). Fleet data may be created based on data received from a plurality of vehicles 220 and may include, for example, dealership and/or service station visit data, diagnostic feeds, in-vehicle update logs, ECU logs, GPS location of vehicles, server access logs, service provider data, vehicle software inventory, warranty data, weather reports, vehicle/ECU authentication logs (success and/or failure), and so on. Server 210 may fuse, correlate and/or aggregate data from other sources. For example, data received from dealerships, weather sources, traffic reports, the internet and so on may be fused or combined with data received from vehicles 220 such that various cyber-security conclusions may be derived. 
In some embodiments, aggregation may include examining, for a set of hacked or attacked vehicles, a manufacturing database (DB) in order to identify or determine whether all vehicles affected by an attack have one or more ECUs manufactured at the same factory and/or at the same date or time; thus a supply chain attack may be identified (storing an occurrence pattern of a security log). For example, the ability to fuse an ECU manufacturing DB (e.g., in server 260) into data available to server 210 can enable server 210 to determine that a virus comes from a specific manufacturer.). Galula does not disclose: act of maintaining the electronic control system of the vehicle at a predetermined place including a factory. However, Endo teaches an act of maintaining the electronic control system of the vehicle at a predetermined place including a factory (¶0022-0023: As shown in Figure 1, the maintenance information management device 10, the management server 12, an in-vehicle apparatus of the vehicle V, and the terminal P are connected to each other so that they can communicate with each other through a network N. In addition, the maintenance information management device 10 is connected to a maintenance factory.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Endo regarding the act of maintaining the electronic control system of the vehicle at a predetermined place including a factory to the method of Galula in order to allow the use of maintenance information for wide use, such as for detecting false positives (Endo ¶0004-0005). 
Galula in view of Endo does not disclose: comparing the plurality of security logs with the feature of the security log predicted to occur due to the act of maintaining the electronic control system of the vehicle at the predetermined place including the factory, to make a determination of whether or not the plurality of security logs is a false positive log generated by detecting an abnormality caused by the maintenance. However, Abdelaziz teaches comparing the plurality of security logs with the feature of the security log predicted to occur due to [the act of maintaining the electronic control system of the vehicle at the predetermined place including the factory,] to make a determination of whether or not the plurality of security logs is a false positive log generated by detecting an abnormality [caused by the maintenance.] (¶0083 & ¶0096: As seen in Figure 8, the sign-in data comprises stored and/or real time user/sign-in data that is evaluated against stored patterns and definitions, such as stored in the risk profiles report 830, label 835 or another detector definition data structure 840, which is stored by the cloud/service portal or that is otherwise accessible to the cloud/server portal. The computing system generates a label data file that contains a listing of the corresponding user risk profiles and that specifies each corresponding user risk profile in the listing as either a false positive, false negative, true positive or true negative, based on the one or more user risk reports and the user input.). Although Galula discloses in ¶0082-0084 & ¶0109 detecting an abnormality caused by maintenance and determining a false positive log by occurrences stored in a server, the prior art does not compare the overall aggregated data with specifically stored patterns. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Abdelaziz regarding comparing the aggregated data with stored patterns to the method of Galula in view of Endo in order to improve the responsiveness, accuracy and effectiveness of security within a system (Abdelaziz ¶0033). With respect to claim 18, Galula teaches a log determination device comprising: a computer including a memory storing a program and a processor configured to execute the program causing the computer to implement: a log acquisition unit that is configured to acquire a plurality of security logs each including an abnormality information indicating an abnormality (¶0017-0025: As illustrated in Figure 1, computing device 100 may include a controller 105 that may be a hardware controller. For example, controller 105 may be, or may include, a central processing unit processor (CPU), a chip or any suitable computing or computational device, an operating system 115, a memory 120, executable code 125, a storage system 130, input devices 135 and output devices 140. Abstract: A system and method for providing fleet cybersecurity may include collecting, by a plurality of data collection units installed in a respective plurality of vehicles in the fleet, information related to cybersecurity and including the information in reports to a server (configured to acquire a plurality of security logs). Data in reports may be aggregated by the server. A cyberattack may be identified based on aggregated data.) 
detected in an electronic control system mounted on a vehicle (¶0030-0033: For example, a DCU 221 may be a sensor or other unit (e.g., a sniffer) adapted to capture messages or packets communicated over an in-vehicle network (e.g., Controller Area Network (CAN) or Ethernet packets, messages or frames) or a DCU 221 may be a sensor or component adapted to obtain information from electronic control units (ECUs) in a vehicle. Server 210 may be, or may be included in, a security operations center (SOC) that may manage various aspects related to the cyber-security of fleet 220. Vehicles 220 may include an in-vehicle network that includes one or more electronic control units (ECUs) control components in the in-vehicle network and the in-vehicle network may include one or more security enforcement units (SEUs). (implying the electronic control unit is mounted on a vehicle).) and a position information indicating a position of the abnormality in the electronic control system; (¶0044-0045: Aggregation may include filtering true attacks from false-positives anomalous causes, e.g., ECU malfunction/replacement, minor mismatches between expected behavior and actual vehicle traffic (abnormality detected in electronic control system). Aggregation may include cross referencing anomalies with ECU malfunctions (e.g., using ECU logs/ diagnostic/ car maintenance logs) and/or comparing a behavior of a vehicle (e.g., as exhibited by network traffic) with other vehicles exhibiting similar behaviors; in this manner false positives may be detected. Aggregation may include relating/comparing location of service access to/with dealership locations, e.g., to identify if someone is tuning or hacking a vehicle or launching an attack. For example, using geolocation data as described (position information), server 210 can determine that an access to a component in a vehicle is made outside of a dealership visit log and/or GPS location data as described. 
Aggregation may include searching (e.g., in logs as described) for a plurality of requests originating from a specific or same device (e.g., based on an International Mobile Equipment Identity (IMEI)) that appear to originate from different users.); a storage unit that is configured to store a rule used to determine the security log that is (¶0069 & ¶0077: Correlation as described may be according to any relevant aspect. For example, server 210 may correlate (compare, relate or examine together) reports or data from/of vehicles that were serviced at a specific service facility, vehicles that include components from a specific manufacturer, vehicles that were sold in the last three months, and so on. Correlation may include examining reports or data related to a plurality of vehicles and looking for similar patterns or events, e.g., according to time, place, specific hardware, specific software, or any other aspect as described.) caused by [an act of maintaining the electronic control system of the vehicle at a predetermined place including a factory;] and (¶0040-0045: Server 210 may store fleet data (e.g., in aggregated data 131). Fleet data may be created based on data received from a plurality of vehicles 220 and may include, for example, dealership and/or service station visit data, diagnostic feeds, in-vehicle update logs, ECU logs, GPS location of vehicles, server access logs, service provider data, vehicle software inventory, warranty data, weather reports, vehicle/ECU authentication logs (success and/or failure), and so on. Server 210 may fuse, correlate and/or aggregate data from other sources. For example, data received from dealerships, weather sources, traffic reports, the internet and so on may be fused or combined with data received from vehicles 220 such that various cyber-security conclusions may be derived. 
In some embodiments, aggregation may include examining, for a set of hacked or attacked vehicles, a manufacturing database (DB) in order to identify or determine whether all vehicles affected by an attack have one or more ECUs manufactured at the same factory and/or at the same date or time; thus a supply chain attack may be identified (storing an occurrence pattern of a security log). For example, the ability to fuse an ECU manufacturing DB (e.g., in server 260) into data available to server 210 can enable server 210 to determine that a virus comes from a specific manufacturer.); Galula does not disclose: an act of maintaining the electronic control system of the vehicle at a predetermined place including a factory. However, Endo teaches an act of maintaining the electronic control system of the vehicle at a predetermined place including a factory (¶0022-0023: As shown in Figure 1, the maintenance information management device 10, the management server 12, an in-vehicle apparatus of the vehicle V, and the terminal P are connected to each other so that they can communicate with each other through a network N. In addition, the maintenance information management device 10 is connected to a maintenance factory.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Endo regarding the act of maintaining the electronic control system of the vehicle at a predetermined place including a factory to the method of Galula in order to allow the use of maintenance information for wide use, such as for detecting false positives (Endo ¶0004-0005). Galula in view of Endo does not disclose: a false positive log determination unit that is configured to compare the plurality of security logs with the rule, to make a determination of whether or not the plurality of security logs is a false positive log not caused by a cyberattack. 
However, Abdelaziz teaches a false positive log determination unit that is configured to compare the plurality of security logs with the rule, to make a determination of whether or not the plurality of security logs is a false positive log not caused by a cyberattack. (¶0083 & ¶0096: As seen in Figure 8, the sign-in data comprises stored and/or real time user/sign-in data that is evaluated against stored patterns and definitions, such as stored in the risk profiles report 830, label 835 or another detector definition data structure 840, which is stored by the cloud/service portal or that is otherwise accessible to the cloud/server portal. The computing system generates a label data file that contains a listing of the corresponding user risk profiles and that specifies each corresponding user risk profile in the listing as either a false positive, false negative, true positive or true negative, based on the one or more user risk reports and the user input.). Although Galula discloses in ¶0082-0084 & ¶0109 detecting an abnormality caused by maintenance and determining a false positive log by occurrences stored in a server, the prior art does not compare the overall aggregated data with specifically stored patterns. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Abdelaziz regarding comparing the aggregated data with stored patterns to the method of Galula in view of Endo in order to improve the responsiveness, accuracy and effectiveness of security within a system (Abdelaziz ¶0033). With respect to claim 19, the combination of Galula in view of Endo and Abdelaziz teaches the device of claim 18 (see rejection of claim 18 above), wherein: the electronic control system includes a plurality of electronic control units and (Galula ¶0030-0044: As seen in Figure 2, a DCU 221 may be any applicable unit, e.g., a sensor adapted to obtain information. 
For example, a DCU 221 may be a sensor or other unit (e.g., a sniffer) adapted to capture messages or packets communicated over an in-vehicle network (e.g., Controller Area Network (CAN) or Ethernet packets, messages or frames) or a DCU 221 may be a sensor or component adapted to obtain information from electronic control units (ECUs) (plurality of electronic control units) in a vehicle.) is configured to detect the abnormality of a plurality of kinds, (Galula ¶0070: In some embodiments, server 210 may run or execute logic to detect security related events based on aggregated, correlated, grouped and/or fused data (detecting abnormality of a plurality of kinds)) and in each security log: the abnormality information indicates, among the abnormality of the plurality of kinds, (Galula ¶0042-0069: Correlation may be according to time. For example, server 210 may correlate reports from a plurality of DCUs (in a respective plurality of vehicles) by examining portions of the reports that relate to a specific time interval and look for events that occurred at the same time or same time interval, and another correlation may include examining and matching data received from two or more security systems, sensors or components in a vehicle. For example, other than correlating raw data (e.g., data related to engine status or state) with data received from security entities, server 210 may correlate data from two or more firewalls in a vehicle and/or an intrusion detection system, etc.) which abnormality has been detected, and the position information indicates, among the plurality of electronic control units, in which electronic control unit the abnormality has been detected. 
(Galula ¶0043: In some embodiments, aggregation may include examining, for a set of hacked or attacked vehicles, a manufacturing database (DB) in order to identify or determine whether all vehicles affected by an attack have one or more ECUs manufactured at the same factory and/or at the same date or time, thus a supply chain attack may be identified. Further, ¶0067 shows an example of examining a single electronic control unit amongst a plurality of electronic control units, wherein, using time correlation, server 210 may detect that a specific, same ECU (e.g., an ECU that controls the infotainment system) in several vehicles stopped reporting between 10:32:45 (HH:MM:SS) and 10:36:24; in such case, server 210 may determine that the ECU that controls the infotainment system in the vehicles has been attacked or hacked (detecting an abnormality).); With respect to claim 21, the combination of Galula in view of Endo and Abdelaziz teaches the device of claim 18 (see rejection of claim 18 above), wherein the act of maintaining the electronic control system of the vehicle is performed by a maintenance worker at the predetermined place including the factory. (Endo ¶0047: An occupant of the vehicle V may also operate the vehicle V directly or indirectly to send a maintenance information registration request. A mechanic may also operate the vehicle V directly or indirectly to send a maintenance information registration request.); It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Endo with the method of Galula in view of Abdelaziz in order to allow the use of maintenance information for wide use, such as for detecting false positives (Endo ¶0004-0005). 
With respect to claim 22, the combination of Galula in view of Endo and Abdelaziz teaches the device of claim 14 (see rejection of claim 14 above), wherein the act of maintaining the electronic control system of the vehicle is performed by a maintenance worker at the predetermined place including the factory. (Endo ¶0047: An occupant of the vehicle V may also operate the vehicle V directly or indirectly to send a maintenance information registration request. A mechanic may also operate the vehicle V directly or indirectly to send a maintenance information registration request.); It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Endo with the method of Galula in view of Abdelaziz in order to allow the use of maintenance information for wide use, such as for detecting false positives (Endo ¶0004-0005). Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Galula et al. (US PGPub No. 20180351980-A1) in view of Endo et al. (US PGPub No. 20230054840-A1), Abdelaziz et al. (US PGPub No. 20200089848-A1) and Hao et al. (US PGPub No. 20070211647-A1). With respect to claim 2, the combination of Galula in view of Endo and Abdelaziz teaches the device of claim 1 (see rejection of claim 1 above) but does not disclose wherein: the occurrence pattern includes an order of occurrence of the plurality of sets in addition to the plurality of sets; and the false positive log determination unit compares the plurality of security logs and the occurrence order of the plurality of security logs with the occurrence pattern, and determines whether or not the plurality of security logs is the false positive log. 
However, Hao teaches wherein: the occurrence pattern includes an order of occurrence of the plurality of sets in addition to the plurality of sets; and (¶0024: As seen in Figure 2, a memory 210 comprises a routing table 211 for routing packets within network 110, a comparison function 212 for comparing payloads of consecutively received packets, a predecessor table for maintaining a plurality of previous arrivals (e.g., the k most recent packets received by 112) (order of occurrence), a coincidence count table 213 for maintaining total coincidence counts for flows identified by 112 (e.g., flows identified based on complex patterns identified within payloads of packets belonging to the identified flows), a plurality of support tables 214, and a plurality of support processes 215.); the false positive log determination unit compares the plurality of security logs and the occurrence order of the plurality of security logs with the occurrence pattern, and determines whether or not the plurality of security logs is the false positive log. (¶0044-0053: In one embodiment, for example, in which a plurality of hash functions and a bloom filter are utilized for performing processing for identifying intersections between current payload P.sub.j and previous payload P.sub.j-1, the probability of a false positive match between the selected k-pattern from current payload P.sub.j and the comparison function for previous payload P.sub.j-1, although low, is not zero). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hao regarding the occurrence pattern order to the method of Galula in view of Endo and Abdelaziz in order to more effectively monitor cases in which the starting point of a suspicious pattern is unknown (Hao ¶0004). Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Galula et al. (US PGPub No. 20180351980-A1) in view of Endo et al. 
(US PGPub No. 20230054840-A1), Abdelaziz et al. (US PGPub No. 20200089848-A1), and Siddiq et al. (US PGPub No. 20200412757-A1). With respect to claim 4, the combination of Galula in view of Endo and Abdelaziz teaches the device of claim 1 (see rejection of claim 1 above) but does not disclose wherein: each of the plurality of sets includes a prediction number of times indicating a total number of times the predicted abnormality occurs; and the false positive log determination unit compares a total number of the security logs that have both the abnormality information and the position information in common with the prediction number of times to determine whether or not the plurality of security logs is a false positive log. However, Siddiq teaches wherein: each of the plurality of sets includes a prediction number of times indicating a total number of times the predicted abnormality occurs; and (¶0033 & ¶0066: Referring back to Figure 5, the extracted features can be input into a machine learning model in the NS appliance 200, which can apply machine learning model weighting to the feature vectors to predict a vulnerability at the node N31 (Step 330). As seen in feature matrix 400 (shown in Figure 6), the machine learning model can apply varying weights to the feature vectors, with greater weight being applied for feature vectors that represent a higher statistical likelihood of vulnerability (number of times the predicted abnormality occurs). The feature vectors can be prioritized and weighted based on the statistical likelihood of risk that a predicted vulnerability might pose to the node N31 or network 10.). 
the false positive log determination unit compares a total number of the security logs that have both the abnormality information and the position information in common (¶0066: For instance, computing resources that are of unknown type (for example, IP address 10.86.54.36 in Figure 6) can be weighted with the greatest weight value (abnormality information), since they are statistically most likely to pose the highest risk to the node N31 or network 10. Computing resources that are known but not scanned (for example, IP addresses 10.1.21.5 or 10.9.45.21 in Figure 6) can be weighted with the next greatest weight value, followed by computing resources where no recent vulnerability has been reported (for example, IP address 10.9.45.21 in Figure 6), and computing resources in high risk locations (for example, Internet facing nodes). The computing resources in middle risk locations (for example, extranet facing nodes) can be weighted less than high risk locations, but greater than lower risk locations such as, for example, intranet nodes (position information).) with the prediction number of times to determine whether or not the plurality of security logs is a false positive log. (¶0067: If the NS appliance 200 predicts vulnerability for the node N31 (YES at Step 340), then node N31 can be evaluated (Step 350) to determine whether the vulnerability is a false positive (Step 360).) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Siddiq regarding determining a false positive log by comparing the number of security logs that share abnormality information and position information with the method of Galula in view of Endo and Abdelaziz in order to preemptively detect and identify all vulnerable computing resources (Siddiq ¶0002-0004). Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Galula et al. (US PGPub No. 20180351980-A1) in view of Endo et al.
(US PGPub No. 20230054840-A1), Abdelaziz et al. (US PGPub No. 20200089848-A1), and Mendelowitz et al. (US PGPub No. 20220400125-A1). With respect to claim 5, the combination of Galula in view of Endo and Abdelaziz teaches the device of claim 1 (see rejection of claim 1 above) but does not disclose wherein: the pattern storage unit includes a prediction non-detection position information indicating a position of the abnormality that is predicted not to be detected in the electronic control system due to the maintenance, and a prediction non-detection abnormality information indicating an abnormality that is predicted not to be detected at the position indicated by the prediction non-detection position information; and the false positive log determination unit compares the plurality of security logs with the prediction non-detection abnormality information and the prediction non-detection position information, and determines whether or not the plurality of security logs is the false positive log. However, Mendelowitz teaches wherein: the pattern storage unit includes a prediction non-detection position information indicating a position of the abnormality that is predicted not to be detected in the electronic control system due to the maintenance, and (¶0191: As seen in Figure 2B, the tests were conducted using the “Ignite dataset” comprising 15,000 clean samples (feature vectors) corresponding to normal events, modes and/or patterns of a vehicle 202 and/or to anomaly modes and/or patterns known for the vehicle 202. The Ignite dataset also includes 4,565 anomaly samples corresponding to anomaly events, modes and/or patterns which are unknown for the vehicle 202 (is not detected in the electronic control system)).
a prediction non-detection abnormality information indicating an abnormality that is predicted not to be detected at the position indicated by the prediction non-detection position information; and (¶0192-0193: The stand-alone autoencoder 230A classified 14,255 of the clean samples to be indeed clean (non-anomaly), meaning that the stand-alone autoencoder 230A detected 14,255 of the clean samples. However, the stand-alone autoencoder 230A classified 745 of the clean samples as anomalies, meaning that the stand-alone autoencoder 230A failed to detect 745 clean samples.) the false positive log determination unit (¶0054: The trained supervised ML applied to the anomaly events may therefore classify and filter accordingly the false positive detections made by the unsupervised ML which correspond to known anomalies, and identify and classify as anomaly events those which are unknown for the vehicle and thus indicative of one or more potential cyberattack events.) compares the plurality of security logs with the prediction non-detection abnormality information and the prediction non-detection position information, and determines whether or not the plurality of security logs is the false positive log. (¶0194-0197: The staged pipeline of the autoencoder 230A followed by the FPR 232A classified 14,905 of the clean samples to be indeed clean, meaning a detection of 14,905 of the clean samples, while classifying 95 of the clean samples as anomalies, meaning it failed to detect only 95 clean samples. Computing the parameters of the stand-alone autoencoder 230A, these are: precision: 0.84, recall: 0.871413, F1 score: 0.86, and false positive rate: 0.16. The parameters computed for the staged pipeline comprising the autoencoder 230A and the FPR 232A are as follows: precision: 0.98, recall: 0.905367, F1 score: 0.94, and false positive rate: 0.02.).
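The precision, recall, and F1 figures quoted from Mendelowitz above follow the standard definitions (F1 is the harmonic mean of precision and recall). As a quick sanity check, offered only as an illustrative sketch and not part of the record, the cited F1 scores can be reproduced from the cited precision and recall values:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (standard definition)."""
    return 2 * precision * recall / (precision + recall)

# Mendelowitz's cited figures, checked against the standard formula:
standalone = f1_score(0.84, 0.871413)   # stand-alone autoencoder 230A -> ~0.855
staged = f1_score(0.98, 0.905367)       # autoencoder 230A + FPR 232A -> ~0.941
```

Rounded to two decimal places these match the 0.86 and 0.94 values quoted in ¶0194-0197, consistent with the examiner's reading of the reference.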
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Mendelowitz regarding determining the false positive log with the method of Galula in view of Endo and Abdelaziz in order to significantly improve the detection performance (Mendelowitz ¶0197). Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Galula et al. (US PGPub No. 20180351980-A1) in view of Endo et al. (US PGPub No. 20230054840-A1), Abdelaziz et al. (US PGPub No. 20200089848-A1), and Chen Kaidi et al. (US PGPub No. 20220391500-A1). With respect to claim 6, the combination of Galula in view of Endo and Abdelaziz teaches the device of claim 1 (see rejection of claim 1 above) but does not disclose wherein the processor configured to execute the program further causes the computer to implement: a false positive information assignment unit that is configured to add a false positive information identifying the false positive log to the plurality of security logs, based on a result of the determination; and a transmission unit that is configured to transmit the security log to which the false positive information is added. However, Chen Kaidi teaches further comprising: a false positive information assignment unit that is configured to add a false positive information identifying the false positive log to the plurality of security logs, based on a result of the determination; and (¶0036-0043: However, in other embodiments, application 112 may be behaving validly and may create a false positive, such as if a shell script is executing to perform a valid computing operation. During these activities with service provider server 120, a corresponding computing log may be generated and used by service provider server 120 for tuning of a security alert and/or detection of malicious activity.).
a transmission unit that is configured to transmit the security log to which the false positive information is added. (¶0043: In this regard, a security alert tuner 134 may be executed by security and event management system 130 to tune security alerts 132 so that one or more alerts and/or rules are ready for deployment (e.g., generating an expected number of alerts or flags without creating too many false positives or missing true positives). In this regard, a sampler 135 may be used to determine a set of computing logs over a time period. In other embodiments, the time period may be automatically selected based on security alerts 132 or as a standard configuration for the corresponding detection system (e.g., using logs from a recent time period to analyze newer threats and computing activities).) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chen Kaidi regarding false positive information with the method of Galula in view of Endo and Abdelaziz in order to properly identify behavior without creating false positives for valid activities, thereby reducing risk (Chen Kaidi ¶0002). Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Galula et al. (US PGPub No. 20180351980-A1) in view of Endo et al. (US PGPub No. 20230054840-A1), Abdelaziz et al. (US PGPub No. 20200089848-A1), and Bertiger et al. (US PGPub No. 20230275907-A1).
With respect to claim 7, the combination of Galula in view of Endo and Abdelaziz teaches the device of claim 1 (see rejection of claim 1 above), but does not disclose wherein: the false positive log determination unit calculates a degree of matching between the plurality of security logs and the occurrence pattern, and the processor configured to execute the program further causes the computer to implement a provisional information assignment unit that is configured to add to the security logs a provisional information indicating a possibility that the plurality of security logs is likely to be the false positive logs in a case where the degree of matching is higher than a threshold value. However, Bertiger teaches wherein: the false positive log determination unit calculates a degree of matching between the plurality of security logs and the occurrence pattern, and (¶0020-0024: In act 210, the graph representation of the second security incident is compared, in graph structure and/or graph attributes, and in a manner that takes attributes of the security events into account, against the stored security incidents to determine a degree of similarity, e.g., in the form of quantitative similarity scores (act 210). With attributes of the security events being encoded in the graph structure (as separate nodes), graph attributes, or both, this comparison inherently takes the attributes of the security events into account.) the processor configured to execute the program further causes the computer to implement a provisional information assignment unit that is configured to add to the security logs a provisional information indicating a possibility that the plurality of security logs is likely to be the false positive logs in a case where the degree of matching is higher than a threshold value.
(¶0020-0026: In some cases, the incident response component 128 may dismiss a security incident 110 as a false positive on the ground that identified similar incidents 110 turned out to be false positives; in other words, the action taken by the incident response component 128 may be suppression 119 of the incident 110. For example, if the mitigating actions taken in response to the identified similar first security incidents differ, an automated action taken for the second security incident may be based on the action taken for the highest-scoring, most similar first security incident. Alternatively, the selected action for the second security incident may depend on the mitigating action taken most often among the similar first security incidents. In a notification to a user, data (a provisional information) about all first incidents whose similarity to the second incident exceeds a certain threshold may be included, providing the security analyst with contextual information from which inferences about the second security incident may be drawn; similarity scores may be included to allow the analyst to properly assess the relevance and relative weights of the reported first security incidents, which is further illustrated in Figure 3.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bertiger regarding a degree of matching with the method of Galula in view of Endo and Abdelaziz in order to identify false positives that distract analysts from true threats (Bertiger ¶0001 & ¶0008). Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Galula et al. (US PGPub No. 20180351980-A1) in view of Endo et al. (US PGPub No. 20230054840-A1), Abdelaziz et al. (US PGPub No. 20200089848-A1), Bertiger et al. (US PGPub No. 20230275907-A1), Trost et al. (US PGPub No. 20210126938-A1), and Singh et al. (US PGPub No. 20190005236-A1).
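Bertiger's thresholded similarity selection quoted above (¶0020-0026) amounts to filtering prior "first" incidents by similarity score, ranking them, and suppressing the new incident when the sufficiently similar ones were all false positives. A minimal sketch of that logic follows; the `score` and `false_positive` field names are hypothetical illustrations, not terms from the reference:

```python
def similar_incidents(first_incidents, threshold):
    """Keep prior (first) incidents whose similarity score to the second
    incident exceeds the threshold, most similar first. Field names here
    are hypothetical; Bertiger describes quantitative similarity scores."""
    hits = [inc for inc in first_incidents if inc["score"] > threshold]
    return sorted(hits, key=lambda inc: inc["score"], reverse=True)

incidents = [
    {"id": "inc-1", "score": 0.91, "false_positive": True},
    {"id": "inc-2", "score": 0.42, "false_positive": False},
    {"id": "inc-3", "score": 0.77, "false_positive": True},
]
report = similar_incidents(incidents, threshold=0.5)
# If every sufficiently similar first incident was a false positive, the
# second incident becomes a candidate for suppression (per ¶0020-0026).
suppress = bool(report) and all(inc["false_positive"] for inc in report)
```

Here `inc-2` falls below the threshold and is excluded, so the remaining incidents (`inc-1`, `inc-3`) are both false positives and the sketch would flag the new incident for suppression.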
With respect to claim 8, the combination of Galula in view of Endo, Abdelaziz, and Bertiger teaches the device of claim 7 (see rejection of claim 7 above) but does not disclose wherein: each of the plurality of security logs includes a time information indicating a time when the abnormality has been detected; the pattern storage unit stores a prediction period indicating a period that is predicted from a time when the predicted abnormality is first detected to a time when the predicted abnormality is last detected; and in a case where the degree of the matching does not reach 100% before the prediction period elapses from an earliest time information among the time information, the false positive log determination unit determines that the plurality of security logs is not the false positive log and deletes the provisional information. However, Trost teaches wherein: each of the plurality of security logs includes a time information indicating a time when the abnormality has been detected; (¶0080: Figure 13 is directed to a method for identifying and classifying cyber-security threats, according to some embodiments. Method 1300 includes receiving 1302, at a server device, data streams associated with one or more network devices tracking activity on a network. The data streams may be data logs collected by data loggers and stored within an accessible database. When the data streams are received (or retrieved), the method includes identifying a security alert 1304, calculating an event sequence time window 1306, generating a related activity score 1308, analyzing meta-data context, and generating a meta-data context score 1312.).
the pattern storage unit stores a prediction period indicating a period that is predicted from a time when the predicted abnormality is first detected to a time when the predicted abnormality is last detected; and (¶0109: In one example, the method includes calculating an event time window spanning a first time period before the occurrence of the security alert and a second time period after the occurrence of the security alert, the event sequence time window being based on the analyzed meta-data context.) in a case where the degree of the matching does not reach 100% before the prediction period elapses from an earliest time information among the time information, (¶0081: The related activity score and the meta-data context score can be scores ranging from 0 to 1, with a higher score being indicative of higher correlation and/or relativity. For example, a meta-data context score of 0.7 may indicate that the alert in question may have originated from, or includes metadata that is, more likely than not associated with malicious behavior. Similarly, a related activity score of 0.7, for example, may indicate that the activities surrounding the alert (e.g., login data, run executables, etc.), more likely than not, are collectively associated with malicious activity. In one example, the respective scores may be generated as output from machine learning models that analyze features derived from the respective input data for a set period of time. For instance, the related activity score's model uses features such as: minimum prevalence age of URLs accessed before and after the alert, minimum popularity count of URLs accessed before and after the alert, count of URLs accessed that fall into known threat intelligence categories like C2 domain/exploit kit, minimum prevalence age of all binaries executed on the host, minimum popularity count of all binaries executed on the host, number of data access anomalies, number of potential data exfiltration anomalies, etc.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Trost regarding time information with the method of Galula in view of Endo, Abdelaziz, and Bertiger in order to detect changes in operational events and quickly and accurately identify threats (Trost ¶0005 & ¶0046). Galula in view of Endo, Abdelaziz, Bertiger, and Trost does not disclose: the false positive log determination unit determines that the plurality of security logs is not the false positive log and deletes the provisional information. However, Singh teaches the false positive log determination unit determines that the plurality of security logs is not the false positive log and deletes the provisional information. (¶0091: Figure 2, investigating memory segments for suspicious activity, noting false-positive logs, removing informational notices and benign memory segments that were logged, and/or memory segments that have been anti-malware application 228, and/or the like.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Singh regarding deleting provisional information with the method of Galula in view of Endo, Abdelaziz, Bertiger, and Trost in order to improve analytics for detecting malware (Singh ¶0030). Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Galula et al. (US PGPub No. 20180351980-A1) in view of Endo et al. (US PGPub No. 20230054840-A1), Abdelaziz et al. (US PGPub No. 20200089848-A1), and Takahashi et al. (US PGPub No. 20200204395-A1). With respect to claim 9, the combination of Galula in view of Endo and Abdelaziz teaches the device of claim 1 (see rejection of claim 1 above), wherein: the electronic control system and the log determination device are mounted on a movable object.
(Galula ¶0030-0033: For example, a DCU 221 may be a sensor or other unit (e.g., a sniffer) adapted to capture messages or packets communicated over an in-vehicle network (e.g., Controller Area Network (CAN) or Ethernet packets, messages or frames), or a DCU 221 may be a sensor or component adapted to obtain information from electronic control units (ECUs) in a vehicle. Server 210 may be, or may be included in, a security operations center (SOC) that may manage various aspects related to the cyber-security of fleet 220. Vehicles 220 may include an in-vehicle network that includes one or more electronic control units (ECUs) that control components in the in-vehicle network, and the in-vehicle network may include one or more security enforcement units (SEUs) (implying the electronic control unit is mounted in a vehicle, which is a movable object).) Galula in view of Endo and Abdelaziz does not disclose: the log determination device is mounted on a movable object. However, Takahashi teaches a log determination device mounted on a movable object. (¶0079: As seen in Figure 3, illustrating Variation 2 of the overall structure of a vehicle 10 (movable object) and in-vehicle network 100. In-vehicle network 100 in Figure 3 includes a node having an anomaly detection function, as compared with in-vehicle networks 100 in Figures 1 and 2. The node having an anomaly detection function is hereafter also referred to as an IDS ECU (log determination device). IDS ECU 120 performs anomaly detection on a message flowing in bus 130 and, upon detecting an anomaly, notifies anomaly detection devices 110a, 110b, 110d, and 110f in the in-vehicle network of the information. IDS ECU 120 is hereafter also called a second ECU, to distinguish it from an ECU (first ECU) connected to bus 130 via an anomaly detection device.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Takahashi regarding the electronic control system with the method of Galula in view of Endo and Abdelaziz in order to easily detect anomalies and unauthorized use (Takahashi ¶0015 & ¶0047). Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Galula et al. (US PGPub No. 20180351980-A1) in view of Endo et al. (US PGPub No. 20230054840-A1), Abdelaziz et al. (US PGPub No. 20200089848-A1), Ford et al. (US PGPub No. 20210351973-A1), and Komano et al. (US PGPub No. 20170139795-A1). With respect to claim 20, the combination of Galula in view of Endo and Abdelaziz teaches the device of claim 18 (see rejection of claim 18 above) wherein: the plurality of electronic control units each includes a security sensor configured to detect the abnormality, (Galula ¶0030: A DCU 221 may be any applicable unit, e.g., a sensor adapted to obtain information. For example, a DCU 221 may be a sensor or other unit (e.g., a sniffer) adapted to capture messages or packets communicated over an in-vehicle network (e.g., Controller Area Network (CAN) or Ethernet packets, messages or frames), or a DCU 221 may be a sensor or component adapted to obtain information from electronic control units (ECUs) in a vehicle.) and each security log further includes: [a time stamp] indicating a time when the abnormality has been detected; and (Galula ¶0047: Aggregation may include coupling, grouping or classifying logs related to devices' pairing at the time of an attack (abnormality) across multiple vehicles to identify common devices (e.g., smartphone, dongle, etc.).) Galula in view of Endo and Abdelaziz does not disclose: a time stamp indicating a time when the abnormality has been detected. Although Galula discloses pairing of a time to an attack, the prior art does not explicitly disclose a time stamp.
However, Ford teaches a time stamp indicating a time when the abnormality has been detected; (¶0130: UCI messages may include a set of anomalies detected in the data variables by the AI engine 408. The AD system, described in detail below, may determine that a network device is functioning abnormally based on the PM/FM/CM/log data. The AI engine 408 may indicate to the user client the identifiers of the abnormal devices, along with the specific KPIs or other data variables that were found to be abnormal and the timestamps of when the abnormal performance occurred and/or was detected.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ford regarding a timestamp with the method of Galula in view of Endo and Abdelaziz in order to better detect the root cause of an anomaly (Ford ¶0006). Galula in view of Endo, Abdelaziz, and Ford does not disclose: a counter indicating how many times the abnormality has been detected. However, Komano teaches a counter indicating how many times the abnormality has been detected. (¶0038: That is, the determination unit 140 first sets the abnormality counter held in the storage 110 to zero when starting testing of the pre-shared key for an ECU 20 and increments the abnormality counter (+1) every time the verification result receiver 122 receives the verification result data indicating Verification Failure (abnormality has been detected).) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Komano regarding a counter with the method of Galula in view of Endo, Abdelaziz, and Ford in order to prevent malicious activity such as corruption and tampering and to ensure validity (Komano ¶0004-0005). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL.
See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAYLOR P VU whose telephone number is (703)756-1218. The examiner can normally be reached MON - FRI (7:30 - 5:00). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Lagor can be reached at (571) 270-5143. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /T.P.V./ Examiner, Art Unit 2437 /ALEXANDER LAGOR/ Supervisory Patent Examiner, Art Unit 2437

Prosecution Timeline

Sep 25, 2023
Application Filed
May 30, 2025
Response after Non-Final Action
Jun 30, 2025
Non-Final Rejection — §103, §112
Sep 24, 2025
Applicant Interview (Telephonic)
Sep 24, 2025
Examiner Interview Summary
Oct 02, 2025
Response Filed
Jan 16, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12506662
SERVICE PROVISION METHOD, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Dec 23, 2025
Patent 12505223
System & Method for Detecting Vulnerabilities in Cloud-Native Web Applications
2y 5m to grant Granted Dec 23, 2025
Patent 12491837
ELECTRONIC SIGNAL BASED AUTHENTICATION SYSTEM AND METHOD THEREOF
2y 5m to grant Granted Dec 09, 2025
Patent 12411931
FUEL DISPENSER AUTHORIZATION AND CONTROL
2y 5m to grant Granted Sep 09, 2025
Patent 12399979
PROVISIONING A SECURITY COMPONENT FROM A CLOUD HOST TO A GUEST VIRTUAL RESOURCE UNIT
2y 5m to grant Granted Aug 26, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
81%
Grant Probability
94%
With Interview (+12.8%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
