Prosecution Insights
Last updated: April 18, 2026
Application No. 17/809,732

TECHNIQUES FOR DETECTING EXPLOITATION OF MEDICAL DEVICE VULNERABILITIES

Status: Non-Final OA (§103)
Filed: Jun 29, 2022
Examiner: ZARRINEH, SHAHRIAR
Art Unit: 2496
Tech Center: 2400 (Computer Networks)
Assignee: Armis Security Ltd.
OA Round: 7 (Non-Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 7-8
Estimated Time to Grant: 2y 8m
Grant Probability with Interview: 87%

Examiner Intelligence

Career Allow Rate: 79% (341 granted / 433 resolved), +20.8% vs TC avg (above average)
Interview Lift: +7.8% for resolved cases with interview (moderate lift)
Avg Prosecution: 2y 8m (typical timeline)
Currently Pending: 59
Total Applications: 492 across all art units (career history)
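The figures above are straightforward ratios. A short sketch reproducing them follows; note that the report does not break out the with/without-interview case counts, so the split used below is hypothetical, chosen only to illustrate how a +7.8-point lift is computed:

```python
# Sanity-check the examiner statistics quoted above.
granted, resolved = 341, 433
career_allow_rate = granted / resolved * 100
print(f"Career allow rate: {career_allow_rate:.1f}%")   # 78.8%, reported as 79%

# Interview lift = allow rate with interview minus allow rate without.
# Hypothetical split (NOT stated in the report), picked to reproduce +7.8:
granted_int, resolved_int = 150, 180
lift = (granted_int / resolved_int
        - (granted - granted_int) / (resolved - resolved_int)) * 100
print(f"Interview lift: {lift:+.1f} points")             # +7.8 points
```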

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 52.2% (+12.2% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 16.2% (-23.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 433 resolved cases.
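One consistency check on these numbers: subtracting each delta from its rate recovers the implied Tech Center baseline, and every statute points to the same 40.0% figure. A throwaway script (not part of the report) makes this explicit:

```python
# Recover the implied Tech Center baseline from each statute's rate and delta.
stats = {  # statute: (examiner rate %, delta vs TC avg %)
    "101": (7.4, -32.6),
    "103": (52.2, +12.2),
    "102": (11.9, -28.1),
    "112": (16.2, -23.8),
}
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # every statute implies the same 40.0% TC baseline
```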

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the communication filed on 11/17/2025, claims 1 and 10-11 are amended; claims 5-8, 15, and 18 are cancelled; and claims 1-4, 9-14, 16-17, and 19-20 are pending in this examination. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. This examination is in response to U.S. Patent Application No. 17/809,732.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission has been entered.

Terminal Disclaimer

The terminal disclaimer filed on 03/13/2015, disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of U.S. Patent Application No. 11/481,503, has been reviewed and is accepted. The terminal disclaimer has been recorded.
Response to Arguments

Applicant's arguments filed 11/17/2025 have been fully considered but they are not persuasive. Applicant submits on pages 7-8 of the remarks filed on 11/17/2025 that Jha, Ponnuswamy, and Ansari, either alone or in combination, fail to disclose at least the above-emphasized language of claim 1: "creating a device profile for a medical device, the device profile created using a classifier hierarchy to sequentially apply a plurality of sub-models to a plurality of extracted features from sensor data of at least one sensor deployed as an out-of-band device, wherein the classifier hierarchy comprises a plurality of levels, wherein applying the plurality of sub-models to the plurality of extracted features comprises determining a next sub-model to apply based on a class output by a most recently applied sub-model of the classifier hierarchy, and wherein the device profile comprises a device attribute for the medical device, the device attribute relating to a communication protocol or connection between the medical device and another device" (emphasis supplied).

The Examiner respectfully disagrees with applicant's arguments for claim 1 on pages 7-8 of the remarks filed on 11/17/2025. The combination of Jha, Ponnuswamy, and Ansari discloses:

Jha discloses "from sensor data of at least one sensor deployed as an out-of-band device": [¶5, medical advances as well as innovations in ultra-low-power computing, networking, and sensing technologies have led to an explosion in implantable and wearable medical devices (IWMDs)… A Personal Healthcare System (PHS) typically includes sensors for physiological data collection, actuators for therapy delivery, remote controllers for reconfiguration, and a hub for logging, compressing, and analyzing the raw health data], and [¶¶89-90, FIG. 1 is a block diagram of a system 10 including a medical device security monitor (MedMon) 20.
The system includes a first medical device 30 that is associated with a patient, e.g., the device may be implanted within or worn by the patient. The first medical device 30 generally includes a communication interface 32. The system also includes a second device 40 that generally includes a communication interface 42. The second device 42 may be an external programmer or other device configured for communication with the first medical device 30. The MedMon 20 is also configured with a communication interface 22. Wireless communications between the MedMon 20, first medical device 30 and second device 40 are generally shown by dashed lines 50. It should be understood that a wide variety of wireless communications techniques may be used without departing from the scope of this disclosure including the approaches discussed above such as Bluetooth (e.g., IEEE 802.15), Zigbee (e.g., IEEE 802.15.4), WiFi (e.g., IEEE 802.11), near field communication (e.g., ISO/IEC 14443) and the like. In operation, the MedMon 20 snoops on all communications between the first medical device 30 and the second device 40 and analyzes said communications for compliance to a set of security policies. [0090] FIG. 2 is a block diagram of a MedMon 20 including a communication interface 22, an anomaly detector 24, response generator 26, and security policies 28. It should be understood that the MedMon 20 will generally be implemented on a device that includes a processor and memory as generally shown by block 21. The communication interface 22 snoops on the communications to/from the medical device (e.g., 30 in FIG. 1). The anomaly detector 24 analyzes the communications with respect to the security policies and notifies the response generator once an anomaly is detected.
The response generator produces a suitable response, e.g., generating a warning message for the user or generating a jamming signal through the communication interface such that the anomalous communication to the medical device is disrupted], and [ see FIG 6a and corresponding text for more detail, ¶129, as shown in FIG. 6a, includes a manual glucose meter 240, insulin pump 230, remote control 250, an attacker 210 and the MedMon 220. The attacker 210 and MedMon are at least partially implemented using Universal Software Radio Peripheral (USRP) boards, e.g., available from Ettus Research, http://www.ettus.com. The USRP is an off-the-shelf software radio platform. It can intercept radio communications within a frequency band and generate wireless signals with different frequency, modulation, and power configurations], and [¶135, In the insulin delivery system, there exist several wireless links: the link from the sensor to the pump to continuously transmit glucose data, the link from the manual meter to the pump to transmit glucose data (the messages on this link are manually triggered), and the link from the remote control to the pump to transmit control commands. All three links can be exploited by an attacker], and [¶¶40, 59, 146]. Furthermore, Ponnuswamy discloses: [¶25, One or more embodiments include determining a target device profile including an expected behavior for a target device. The target device profile is determined by applying an unsupervised machine learning algorithm to different datasets. First, a global dataset includes multiple sets of device data, each set of device data including device attributes and behaviors corresponding to a different client device. The unsupervised machine learning algorithm is applied to the global dataset to determine clusters within the global dataset (equated to applying first sub-model). 
Second, the global dataset is divided into multiple device type datasets, based on the device type associated with each set of device data (equated to sequentially applying second sub-model). The global dataset may be divided into the device type datasets using a classifier function determined via a supervised machine learning algorithm. After dividing the global dataset into device type datasets, the unsupervised machine learning algorithm is applied to each device type dataset to determine clusters within each device type dataset], and [¶¶60-61, In one or more embodiments, a supervised machine learning algorithm 214 (also referred to as a “supervised learning algorithm”) is configured to determine a classifier function 216 based on a training dataset. A supervised machine learning algorithm 214 may be implemented using one or more algorithms well-known in the art, such as support vector machine (SVM), neural networks, pattern recognition, and/or Bayesian statistics. In one or more embodiments, a classifier function 216 determines classifications 218 for one or more sets of device data. A classifier function 216 is applied to a set of device data to determine a particular class, of a candidate set of classes, for the set of device data. Each class includes device data associated with at least one common device attribute. In an embodiment, each class includes device data associated with the same device type. Examples of device types include Apple iPhone 7, Apple iPhone 6, and Samsung Galaxy S6. In other embodiments, each class includes device data associated with another common device attribute, such as the same manufacturer, the same operating channel, and/or the same multiple-input and multiple-output (MIMO) setting. 
As illustrated, a classifier function 216 divides a global dataset 212 into multiple device type datasets 220a-b], and [¶¶69-70, Initially, the labeled global dataset 202 may be a bootstrap dataset that is used to initiate the supervised machine learning system. Subsequently, the labeled global dataset 202 may be expanded and/or modified based on device data collected for client devices in a communication network. The labeled global dataset 202 may be expanded and/or modified, in real-time, using device data collected for devices in a communication network as the device data is being collected… global dataset, with the added labels, is added to the labeled global dataset; expanded labeled global dataset input into the supervised machine learning algorithm; supervised machine learning algorithm modifies and/or refines the classifier function based on the new device data of the global dataset; labeled global dataset is iteratively expanded and/or modified, such that the accuracy of the classifier function is iteratively improved], and [¶¶126-127, profile builder selects at least a subset of the cluster groups obtained from a device type dataset as relevant cluster groups; selection of relevant cluster groups, from the cluster groups obtained from a device type dataset, is based on the attributes that are known for the target device (subset of groups analogous to the indicated sub-models)… In the second phase, the profile builder 406 analyzes each selected cluster group to identify clusters that share at least one device attribute with the target device. Clusters that share at least one device attribute with the target device are referred to as "relevant clusters." The profile builder 406 does not analyze any non-selected cluster groups for relevant clusters. The relevant clusters may include (a) clusters obtained from one or more device type datasets and/or (b) clusters obtained from the global dataset], and [¶180, A classifier function is applied to the global dataset.
The classifier function determines that Device Data #01 and Device Data #02 are associated with the device type, Apple iPhone 7. The classifier function determines that Device Data #03 is associated with the device type, Samsung Galaxy S6] . And furthermore, Ansari discloses: [ see FIG. 7 and corresponding text for more details, Col. 10 lines 55-67, Col. 11 lines 1-23, In some embodiments, a given machine learning problem may be solved using a collection of models rather than a single model, e.g., arranged in a pipeline or sequence in which successor models are used for finer-grained predictions or classifications than predecessor models. Such pipelines may present additional opportunities for interactive exploration and analysis, as the decisions made at some pipeline stages may be dependent on earlier decisions which may not necessarily be exposed by default to the users of the pipelines. FIG. 7 illustrates example information which may be displayed with respect to a multi-phase machine learning model pipeline, according to at least some embodiments. In the depicted embodiment, a set of pipelined machine learning models 701 may comprise a phase 1 model 702, a phase 2 model 704, and a phase 3 model 706. In an embodiment in which the three models are used to perform successively higher granularities of classification, phase 1 model 702A may, for example, comprise a broadest-level classifier. The phase 2 model 704 may perform more fine-grained classification than the phase 1 model 702, and the particular classifier 704 to be used may in some implementations be selected based at least in part on the phase 1 classification result 722A. Similarly, the phase 3 model 706 may be used for classification at an even higher granularity than the phase 2 model 704 in at least some embodiments, based at least in part on the phase 2 result 722B, eventually providing the phase 3 result 722C. 
Consider a scenario in which an input record may be classified at several levels of granularity: at the broadest level, into classes A, B, C or D. Then, at the second level, if the broadest class is A, the record may be classified (using a second level classifier 704) further into one of four second-level classes AA, AB, AC and AD. Similarly, if the broadest class was B, a second-level classifier specifically for instances of B may be used to classify the record into one of four other second-level classes BA, BB, BC or BD, and so on].

Examiner maintains the rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-4, 9-11, 13-14, 16-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. US 2013/0247194 to Jha (cited in IDS filed 06/29/2022) in view of U.S. Patent Application Publication No. US 2018/0225592 to Ponnuswamy, and further in view of U.S. Patent No. US 11,593,700 to Ansari.

Regarding claims 1 and 10-11, Jha discloses a method for detecting medical device exploitable vulnerabilities, comprising: creating a device profile for a medical device, wherein the device profile comprises a device attribute for the medical device, the device attribute relating to a communication protocol or connection between the medical device and another device; identifying, in a vulnerabilities database, a device entry corresponding to devices having the device attribute, wherein the vulnerabilities database comprises: the device entry; and a plurality of known exploits associated with the device entry, wherein each known exploit of the plurality of known exploits comprises an exploitation condition that indicates a condition required for implementing the respective known exploit on devices having the device attribute; determining, from the vulnerabilities database, a first exploitation condition of a first known exploit associated with the device entry [¶7, Due to the absence of cryptographic protection, the wireless channel has been identified as the Achilles' heel of medical devices. Recent demonstrations of successful RF wireless attacks (communication vulnerability) on cardiac pacemakers and insulin pumps have placed medical device security under great scrutiny. For example, an attack described on a glucose monitoring and insulin delivery system may exploit the wireless channels between the device and controller, and between medical devices. In such a scenario, the attacker first eavesdrops on the wireless packets sent from a remote control to an insulin pump. From the captured packets, the attacker reverse-engineers the device PINs associated with the remote control and glucose meter.
By mimicking the remote control, the attacker can configure the insulin pump to disable or change the intended therapy, stop the insulin injection, or inject a much higher dose than allowed. By mimicking the glucose meter, the attacker can send bogus data to the insulin pump, causing the pump to adjust insulin delivery based on the false data. In addition, the attacker can snoop on the packets to infer sensitive patient data], and [¶32, Hardware failure can be, e.g., transient errors in an electronic component caused by the complex physical environment (e.g., noise, power disturbance, extreme temperature, vibration, electromagnetic interference, etc.). Studies have shown that electromagnetic interference may cause temporary or permanent malfunction in pacemakers and implantable cardioverter defibrillators (ICDs)… electromagnetic interference, if strong enough, could also change the parameter values of the neurostimulator, turn the neurostimulator off, or cause it to jolt the patient], and [¶48, Side-channel attacks employ statistical analysis of information leaked through physical channels (communication attack), such as power consumption, execution time, electromagnetic emission, etc. Side-channel attacks can possibly be used against PHSs (personal healthcare systems) and medical devices for privacy invasion], and [see Table 1, ¶60, example of several PHSs and medical devices and their associated threats and risks (equated to a vulnerability database). If the attacker has knowledge of the wireless communication protocol used, the communication between a medical device and another easily reveals its presence. So does a response to a communication request sent by an attacker. Examples of several PHSs and their associated threats and risks are listed in Table I.
TABLE I: THREATS AND RISKS IN PHSs
  Pacemaker and ICD:        software/hardware error (I, A); side-channel attack (C, I, A, P)
  Insulin pump:             software/hardware error (I, A); side-channel attack (C, I, A, P)
  Deep brain stimulation:   software/hardware error (I, A); side-channel attack (C, I, A, P)
  Intrathecal drug delivery: software/hardware error (I, A); side-channel attack (C, I, A, P)
  Retina sensor:            software/hardware error (I, A)
  Cochlear implant:         software/hardware error (I, A)
  Fall detector:            software/hardware error (I, A); wireless attack (C, I, A, P)
  Epilepsy detector:        software/hardware error (I, A)
  Health Guide:             malware and vulnerability exploit (C, I, A, P); side-channel attack (C, P)
  Heart rate monitor:       wireless attack (C, I, A, P)
Column 1 corresponds to PHSs and medical devices. Column 2 corresponds to the threat types they are prone to. Column 3 corresponds to the risks associated with each system with respect to the particular threat. Confidentiality, integrity, availability, and privacy risks are referred to as C, I, A, and P, respectively], and [¶89, FIG. 1 is a block diagram of a system 10 including a medical device security monitor (MedMon) 20. The system includes a first medical device 30 that is associated with a patient, e.g., the device may be implanted within or worn by the patient. The first medical device 30 generally includes a communication interface 32. The system also includes a second device 40 that generally includes a communication interface 42. The second device 42 may be an external programmer or other device configured for communication with the first medical device 30. The MedMon 20 is also configured with a communication interface 22. Wireless communications between the MedMon 20, first medical device 30 and second device 40 are generally shown by dashed lines 50.
It should be understood that a wide variety of wireless communications techniques may be used without departing from the scope of this disclosure including the approaches discussed above such as Bluetooth (e.g., IEEE 802.15), Zigbee (e.g., IEEE 802.15.4), WiFi (e.g., IEEE 802.11), near field communication (e.g., ISO/IEC 14443) and the like. In operation, the MedMon 20 snoops on all communications between the first medical device 30 and the second device 40 and analyzes said communications for compliance to a set of security policies], and [¶90].

Examiner Note: as indicated throughout the Jha application and Table I (database), there are many medical devices with different attributes (names and functionality) and associated threats and risks (vulnerabilities and exploitation conditions) which communicate with each other through wireless connections. Some of the communication/connection vulnerabilities are:
1) Wireless attack: for example, wireless attacks on cardiac pacemakers and insulin pumps have placed medical device security under great scrutiny. For example, an attack described on a glucose monitoring and insulin delivery system may exploit the wireless channels between the device and controller, and between medical devices.
2) Side-channel attacks, which employ statistical analysis of information leaked through physical channels.
3) Hardware failure: e.g., transient errors in an electronic component caused by the complex physical environment (e.g., noise, power disturbance, extreme temperature, vibration, electromagnetic interference, etc.).
Studies have shown that electromagnetic interference may cause temporary or permanent malfunction in pacemakers and implantable cardioverter defibrillators (ICDs)]; and analyzing behavior and configuration of the medical device to detect an exploitable vulnerability for the medical device, wherein the exploitable vulnerability is a behavior or configuration of the medical device which meets the first exploitation condition [¶46, Furthermore, since software is inherently complex, abstract and intangible, software vulnerabilities are inevitable and difficult to detect. In an incident of buffer overflow, the corrupted memory could originally be holding an address to an instruction, which the program should be redirected to. After corruption of the address, the program may be redirected to a false address and start executing random code. If a buffer overflow is triggered by especially-crafted user inputs, causing the redirected program to execute malicious code, it is called a buffer overflow attack. With some knowledge of system software, attackers can exploit the buffer overflow vulnerabilities as well as other software vulnerabilities to steal private information, tamper with medical data and even change device settings (configuration)], and [¶48, other attack scenarios are possible as well. Suppose communications between implanted pacemakers and external programmers are encrypted, and the same secret key is shared by substantially all pacemakers of the same model so that the ambulance staff can access the device in case of an emergency. If an attacker has access to a pacemaker unit, the secret key can become a vulnerable target for differential power analysis, a form of side-channel attack that utilizes power consumption information. Once successful, the attacker could reveal and publicize the secret key and thus make the cryptographic protection ineffectual], and [¶51, a PHS that is impacted by attacks or malfunctions can lead to different types of risks. 
Ensuring the safety of PHSs involves protection against each type of potential risk. PHS security shares the high-level goals of traditional information security: confidentiality, integrity and availability. In addition, privacy is another useful goal for PHSs and medical devices. Privacy involves keeping the presence of the device on the patient confidential. Correspondingly, a PHS can be subject to four types of potential risks: confidentiality, integrity, availability, and privacy], and [see Table 1, ¶60, example of several PHSs and medical devices and their associated threats and risks (equated to a vulnerability database)], and [¶¶52-60, Abstract]; and performing at least one mitigation action based on the exploitable vulnerability [Abstract, A response generator configured to generate a response on a condition that an anomaly is detected. The response may be a warning message configured to warn the patient. The MedMon may include a transmitter configured to transmit the response. The response may be a jamming signal configured to disrupt communications between the first medical device and second device], and [¶11], and [¶69, Cryptography is one approach for securing the wireless communication channel and preventing unauthorized access. It can protect device integrity as well as data confidentiality], and [¶70, Another straightforward key-distribution solution is to ask patients to carry cards or bracelets imprinted with the secret keys of their devices].
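The claim elements mapped above (a device entry keyed by a device attribute, known exploits each carrying an exploitation condition, and a mitigation trigger) describe a lookup-and-match flow. A minimal sketch of that flow follows; every name, attribute, and database entry here is hypothetical, invented for illustration, and not taken from the application or the cited art:

```python
# Hypothetical sketch of the claimed flow: device profile -> device entry ->
# exploitation conditions -> exploitable-vulnerability detection.
VULN_DB = {
    # device attribute -> known exploits, each with an exploitation condition
    "wireless:proprietary-rf": [
        {"exploit": "replay-attack",
         "condition": lambda cfg: not cfg.get("encrypted", False)},
        {"exploit": "pin-recovery",
         "condition": lambda cfg: cfg.get("pin_length", 0) < 6},
    ],
}

def detect_exploitable(profile, observed_config):
    """Return exploits whose exploitation condition the device's observed
    behavior/configuration meets; a hit would trigger a mitigation action."""
    exploits = VULN_DB.get(profile["attribute"], [])
    return [e["exploit"] for e in exploits if e["condition"](observed_config)]

profile = {"device": "insulin-pump", "attribute": "wireless:proprietary-rf"}
print(detect_exploitable(profile, {"encrypted": False, "pin_length": 4}))
# both conditions met -> mitigate
print(detect_exploitable(profile, {"encrypted": True, "pin_length": 8}))
# no exploitation condition met
```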
creating a device profile for a medical device, the device profile created using a classifier hierarchy to sequentially apply a plurality of sub-models to a plurality of extracted features from sensor data of at least one sensor deployed as an out-of-band device, wherein the classifier hierarchy comprises a plurality of levels, wherein applying the plurality of sub-models to the plurality of extracted features comprises determining a next sub-model to apply based on a class output by a most recently applied sub-model of the classifier hierarchy. The combination of Jha, Ponnuswamy, and Ansari discloses: Jha discloses "from sensor data of at least one sensor deployed as an out-of-band device": [¶5, medical advances as well as innovations in ultra-low-power computing, networking, and sensing technologies have led to an explosion in implantable and wearable medical devices (IWMDs)… A Personal Healthcare System (PHS) typically includes sensors for physiological data collection, actuators for therapy delivery, remote controllers for reconfiguration, and a hub for logging, compressing, and analyzing the raw health data], and [¶¶89-90, FIG. 1 is a block diagram of a system 10 including a medical device security monitor (MedMon) 20. The system includes a first medical device 30 that is associated with a patient, e.g., the device may be implanted within or worn by the patient. The first medical device 30 generally includes a communication interface 32. The system also includes a second device 40 that generally includes a communication interface 42. The second device 42 may be an external programmer or other device configured for communication with the first medical device 30. The MedMon 20 is also configured with a communication interface 22. Wireless communications between the MedMon 20, first medical device 30 and second device 40 are generally shown by dashed lines 50.
It should be understood that a wide variety of wireless communications techniques may be used without departing from the scope of this disclosure including the approaches discussed above such as Bluetooth (e.g., IEEE 802.15), Zigbee (e.g., IEEE 802.15.4), WiFi (e.g., IEEE 802.11), near field communication (e.g., ISO/IEC 14443) and the like. In operation, the MedMon 20 snoops on all communications between the first medical device 30 and the second device 40 and analyzes said communications for compliance to a set of security policies. [0090] FIG. 2 is a block diagram of a MedMon 20 including a communication interface 22, an anomaly detector 24, response generator 26, and security policies 28. It should be understood that the MedMon 20 will generally be implemented on a device that includes a processor and memory as generally shown by block 21. The communication interface 22 snoops on the communications to/from the medical device (e.g., 30 in FIG. 1). The anomaly detector 24 analyzes the communications with respect to the security policies and notifies the response generator once an anomaly is detected. The response generator produces a suitable response, e.g., generating a warning message for the user or generating a jamming signal through the communication interface such that the anomalous communication to the medical device is disrupted], and [ see FIG 6a and corresponding text for more detail, ¶129, as shown in FIG. 6a, includes a manual glucose meter 240, insulin pump 230, remote control 250, an attacker 210 and the MedMon 220. The attacker 210 and MedMon are at least partially implemented using Universal Software Radio Peripheral (USRP) boards, e.g., available from Ettus Research, http://www.ettus.com. The USRP is an off-the-shelf software radio platform. 
It can intercept radio communications within a frequency band and generate wireless signals with different frequency, modulation, and power configurations], and [¶135, In the insulin delivery system, there exist several wireless links: the link from the sensor to the pump to continuously transmit glucose data, the link from the manual meter to the pump to transmit glucose data (the messages on this link are manually triggered), and the link from the remote control to the pump to transmit control commands. All three links can be exploited by an attacker], [¶¶40, 59, 146]. Furthermore, Ponnuswamy discloses: [¶25, One or more embodiments include determining a target device profile including an expected behavior for a target device. The target device profile is determined by applying an unsupervised machine learning algorithm to different datasets. First, a global dataset includes multiple sets of device data, each set of device data including device attributes and behaviors corresponding to a different client device. The unsupervised machine learning algorithm is applied to the global dataset to determine clusters within the global dataset (equated to applying first sub-model). Second, the global dataset is divided into multiple device type datasets, based on the device type associated with each set of device data (equated to sequentially applying second sub-model). The global dataset may be divided into the device type datasets using a classifier function determined via a supervised machine learning algorithm. After dividing the global dataset into device type datasets, the unsupervised machine learning algorithm is applied to each device type dataset to determine clusters within each device type dataset], and [¶¶60-61, In one or more embodiments, a supervised machine learning algorithm 214 (also referred to as a “supervised learning algorithm”) is configured to determine a classifier function 216 based on a training dataset. 
A supervised machine learning algorithm 214 may be implemented using one or more algorithms well-known in the art, such as support vector machine (SVM), neural networks, pattern recognition, and/or Bayesian statistics. In one or more embodiments, a classifier function 216 determines classifications 218 for one or more sets of device data. A classifier function 216 is applied to a set of device data to determine a particular class, of a candidate set of classes, for the set of device data. Each class includes device data associated with at least one common device attribute. In an embodiment, each class includes device data associated with the same device type. Examples of device types include Apple iPhone 7, Apple iPhone 6, and Samsung Galaxy S6. In other embodiments, each class includes device data associated with another common device attribute, such as the same manufacturer, the same operating channel, and/or the same multiple-input and multiple-output (MIMO) setting. As illustrated, a classifier function 216 divides a global dataset 212 into multiple device type datasets 220a-b], and [¶¶69-70, Initially, the labeled global dataset 202 may be a bootstrap dataset that is used to initiate the supervised machine learning system. Subsequently, the labeled global dataset 202 may be expanded and/or modified based on device data collected for client devices in a communication network. 
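Ponnuswamy's two-stage scheme as quoted above, in which a supervised classifier function first divides the global dataset into device-type datasets and unsupervised clustering then runs within each, can be sketched as below. Both stages are deliberately trivial stand-ins (a label lookup in place of a trained classifier, grouping by channel in place of real clustering); nothing here is the reference's actual algorithm:

```python
from collections import defaultdict

def classifier_function(record: dict) -> str:
    # Stand-in for a trained classifier: read the device type directly.
    return record["device_type"]

def split_global_dataset(global_dataset: list[dict]) -> dict[str, list[dict]]:
    """Divide the global dataset into per-device-type datasets."""
    datasets: dict[str, list[dict]] = defaultdict(list)
    for record in global_dataset:
        datasets[classifier_function(record)].append(record)
    return dict(datasets)

def cluster(dataset: list[dict]) -> dict[str, list[dict]]:
    """Stand-in for unsupervised clustering: group by operating channel."""
    clusters: dict[str, list[dict]] = defaultdict(list)
    for record in dataset:
        clusters[record["channel"]].append(record)
    return dict(clusters)

def cluster_by_device_type(global_dataset: list[dict]) -> dict[str, dict]:
    """Second stage: apply clustering within each device-type dataset."""
    return {dtype: cluster(ds)
            for dtype, ds in split_global_dataset(global_dataset).items()}
```

The structure, not the algorithms, is the point: the supervised split determines which per-type dataset each record lands in before any clustering is attempted.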
The labeled global dataset 202 may be expanded and/or modified, in real-time, using device data collected for devices in a communication network as the device data is being collected… global dataset, with the added labels, is added to the labeled global dataset; expanded labeled global dataset input into the supervised machine learning algorithm; supervised machine learning algorithm modifies and/or refines the classifier function based on the new device data of the global dataset; labeled global dataset is iteratively expanded and/or modified, such that the accuracy of the classifier function is iteratively improved], and [¶¶126-127, profile builder selects at least a subset of the cluster groups obtained from a device type dataset as relevant cluster groups; selection of relevant cluster groups, from the cluster groups obtained from a device type dataset, is based on the attributes that are known for the target device (subset of group analogous to indicated sub-models)… In the second phase, the profile builder 406 analyzes each selected cluster group to identify clusters that share at least one device attribute with the target device. Clusters that share at least one device attribute with the target device are referred to as “relevant clusters.” The profile builder 406 does not analyze any non-selected cluster groups for relevant clusters. The relevant clusters may include (a) clusters obtained from one or more device type datasets and/or (b) clusters obtained from the global dataset], and [¶180, A classifier function is applied to the global dataset. The classifier function determines that Device Data #01 and Device Data #02 are associated with the device type, Apple iPhone 7.
The classifier function determines that Device Data #03 is associated with the device type, Samsung Galaxy S6]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Jha with the teaching of Ponnuswamy in order to determine expected behaviors for a target device based on the global dataset and the device type datasets, where the classifier function determines a device type associated with the device data of the target device [Ponnuswamy, ¶¶121, 124, Abstract]. And furthermore, Ansari discloses: [see FIG. 7 and corresponding text for more details, Col. 10 lines 55-67, Col. 11 lines 1-23, In some embodiments, a given machine learning problem may be solved using a collection of models rather than a single model, e.g., arranged in a pipeline or sequence in which successor models are used for finer-grained predictions or classifications than predecessor models. Such pipelines may present additional opportunities for interactive exploration and analysis, as the decisions made at some pipeline stages may be dependent on earlier decisions which may not necessarily be exposed by default to the users of the pipelines. FIG. 7 illustrates example information which may be displayed with respect to a multi-phase machine learning model pipeline, according to at least some embodiments. In the depicted embodiment, a set of pipelined machine learning models 701 may comprise a phase 1 model 702, a phase 2 model 704, and a phase 3 model 706. In an embodiment in which the three models are used to perform successively higher granularities of classification, phase 1 model 702A may, for example, comprise a broadest-level classifier. The phase 2 model 704 may perform more fine-grained classification than the phase 1 model 702, and the particular classifier 704 to be used may in some implementations be selected based at least in part on the phase 1 classification result 722A.
Similarly, the phase 3 model 706 may be used for classification at an even higher granularity than the phase 2 model 704 in at least some embodiments, based at least in part on the phase 2 result 722B, eventually providing the phase 3 result 722C. Consider a scenario in which an input record may be classified at several levels of granularity: at the broadest level, into classes A, B, C or D. Then, at the second level, if the broadest class is A, the record may be classified (using a second level classifier 704) further into one of four second-level classes AA, AB, AC and AD. Similarly, if the broadest class was B, a second-level classifier specifically for instances of B may be used to classify the record into one of four other second-level classes BA, BB, BC or BD, and so on]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Jha and Ponnuswamy with the teaching of Ansari in order to use a collection of models rather than a single model, e.g., arranged in a pipeline or sequence in which successor models are used for finer-grained predictions or classifications than predecessor models [Ansari, Col. 10 lines 55-67, Col. 11 lines 1-23]. Regarding claims 3 and 13, Jha discloses wherein the medical device is any of a medical imaging device, a diagnostic device, life support equipment, a pump, a defibrillator, and a pacemaker [¶7, Due to the absence of cryptographic protection, the wireless channel has been identified as the Achilles' heel of medical devices. Recent demonstrations of successful RF wireless attacks on cardiac pacemakers and insulin pumps have placed medical device security under great scrutiny], and [¶34, Many medical devices perform life-sustaining functions, such as cardiac pacing and defibrillation].
Regarding claims 4 and 14, Jha discloses further comprising: querying a vulnerability scanner based on the analyzed behavior and configuration of the medical device, wherein the exploitable vulnerability is detected based further on a response of the vulnerability scanner to the query [¶¶92-93, the MedMon may be trained and configured first in order to learn the characteristics of normal behavior. For both training and actual use, it must be placed at a fixed position relative to the IWMD. When in use, the MedMon quietly monitors communications among the different components of a PHS. It searches for anomalies in transmitted signals to determine whether a wireless attack is being launched against the PHS. When anomalies are identified, indicating a possible attack, the MedMon can respond passively or actively, depending on its configuration for this type of anomaly or attack. FIG. 4a is a block diagram showing a passive mode configuration where, once an attack is identified, the MedMon is configured to provide a warning to the patient. FIG. 4b is a block diagram showing an active mode configuration where, once an attack is identified, the MedMon is configured to jam communications].
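The passive/active configurations of FIGS. 4a-4b quoted above amount to a small dispatch on the monitor's configured mode once an anomaly is identified. A hedged sketch, with the function name and string return values invented for illustration:

```python
def respond(anomaly_detected: bool, mode: str = "passive") -> str:
    """Return the monitor's action for a detected anomaly.

    Passive mode warns the patient; active mode jams the anomalous
    transmission, mirroring the two configurations described above.
    """
    if not anomaly_detected:
        return "no action"
    if mode == "active":
        return "jam transmission"  # disrupt the anomalous communication
    return "warn patient"          # passive: alert only
```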
Regarding claims 9 and 19, Jha does not explicitly disclose, however, Ponnuswamy discloses, wherein each classifier is trained to output a class and a confidence score, wherein the class output by each sub-model is determined based on a class and the confidence score output by each classifier of the sub-model [¶¶28-34: particular behavior that is common to multiple relevant clusters is identified; values, for particular behavior, indicated by each such relevant cluster are identified; a weight, corresponding to each value, is determined; weight determined based on various factors, such as: (a) attribute type shared between a particular relevant cluster and the target device; (b) behavior type of the common behavior; (c) number of device attributes that are shared; (d) a number of devices in a particular relevant cluster; (e) particular time period in which device data was collected; and (f) a correlation strength between attributes and behaviors associated with a particular relevant cluster], and [¶67, classifier function determines that a particular set of device data corresponds to device type], and [¶68, global dataset is a training dataset for input into a supervised machine learning algorithm; labeled global dataset includes multiple sets of device data, and labels corresponding to each set of device data], and [¶¶35, 67-68, 84, 88, 130]. Regarding claim 16, Jha does not explicitly disclose, however, Ponnuswamy discloses wherein the sequential application ends with applying a last sub-model of the plurality of sub-models, wherein the first device attribute is determined based on an output of the last sub-model of the plurality of sub-models [¶25, The global dataset may be divided into the device type datasets using a classifier function determined via a supervised machine learning algorithm.
After dividing the global dataset into device type datasets, the unsupervised machine learning algorithm is applied to each device type dataset to determine clusters (sub-model) within each device type dataset], and [¶¶126-127, profile builder selects at least a subset of the cluster groups obtained from a device type dataset as relevant cluster groups; selection of relevant cluster groups, from the cluster groups obtained from a device type dataset, is based on the attributes that are known for the target device (subset of group analogous to indicated sub-models)… In the second phase, the profile builder 406 analyzes each selected cluster group to identify clusters that share at least one device attribute with the target device. Clusters that share at least one device attribute with the target device are referred to as “relevant clusters.” The profile builder 406 does not analyze any non-selected cluster groups for relevant clusters. The relevant clusters may include (a) clusters obtained from one or more device type datasets and/or (b) clusters obtained from the global dataset]. And furthermore, Ansari discloses: [see FIG. 7 and corresponding text for more details, Col. 10 lines 55-67, Col. 11 lines 1-23, In some embodiments, a given machine learning problem may be solved using a collection of models rather than a single model, e.g., arranged in a pipeline or sequence in which successor models are used for finer-grained predictions or classifications than predecessor models. Such pipelines may present additional opportunities for interactive exploration and analysis, as the decisions made at some pipeline stages may be dependent on earlier decisions which may not necessarily be exposed by default to the users of the pipelines. FIG. 7 illustrates example information which may be displayed with respect to a multi-phase machine learning model pipeline, according to at least some embodiments.
In the depicted embodiment, a set of pipelined machine learning models 701 may comprise a phase 1 model 702, a phase 2 model 704, and a phase 3 model 706. In an embodiment in which the three models are used to perform successively higher granularities of classification, phase 1 model 702A may, for example, comprise a broadest-level classifier. The phase 2 model 704 may perform more fine-grained classification than the phase 1 model 702, and the particular classifier 704 to be used may in some implementations be selected based at least in part on the phase 1 classification result 722A. Similarly, the phase 3 model 706 may be used for classification at an even higher granularity than the phase 2 model 704 in at least some embodiments, based at least in part on the phase 2 result 722B, eventually providing the phase 3 result 722C. Consider a scenario in which an input record may be classified at several levels of granularity: at the broadest level, into classes A, B, C or D. Then, at the second level, if the broadest class is A, the record may be classified (using a second level classifier 704) further into one of four second-level classes AA, AB, AC and AD. Similarly, if the broadest class was B, a second-level classifier specifically for instances of B may be used to classify the record into one of four other second-level classes BA, BB, BC or BD, and so on]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Jha and Ponnuswamy with the teaching of Ansari in order to use a collection of models rather than a single model, e.g., arranged in a pipeline or sequence in which successor models are used for finer-grained predictions or classifications than predecessor models [Ansari, Col. 10 lines 55-67, Col. 11 lines 1-23].
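Ansari's pipeline, as quoted, selects each finer-grained classifier based on the previous phase's result (class A routes to the AA/AB/... classifier, class B to BA/BB/...). A minimal two-phase sketch, with features and thresholds invented purely for illustration:

```python
def phase1(record: dict) -> str:
    # Broadest-level classifier (illustrative threshold on a made-up feature).
    return "A" if record["f0"] < 0.5 else "B"

def phase2_for_a(record: dict) -> str:
    return "AA" if record["f1"] < 0.5 else "AB"

def phase2_for_b(record: dict) -> str:
    return "BA" if record["f1"] < 0.5 else "BB"

# The second-level classifier is selected by the phase 1 result, mirroring
# how the particular classifier 704 is chosen from result 722A above.
PHASE2 = {"A": phase2_for_a, "B": phase2_for_b}

def classify(record: dict) -> str:
    return PHASE2[phase1(record)](record)
```

The routing dictionary is what makes this a pipeline rather than a flat classifier: a record only ever sees the second-level model that corresponds to its broad class.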
Regarding claim 17, Jha and Ansari do not explicitly disclose, however, Ponnuswamy discloses, wherein each sub-model outputs a device attribute when applied to at least a portion of the plurality of features extracted from the device activity data, wherein the device attribute is determined based on the device attribute class output by the last sub-model of the plurality of sub-models [¶61, In one or more embodiments, a classifier function 216 determines classifications 218 for one or more sets of device data. A classifier function 216 is applied to a set of device data to determine a particular class, of a candidate set of classes, for the set of device data. Each class includes device data associated with at least one common device attribute. In an embodiment, each class includes device data associated with the same device type. Examples of device types include Apple iPhone 7, Apple iPhone 6, and Samsung Galaxy S6. In other embodiments, each class includes device data associated with another common device attribute, such as the same manufacturer, the same operating channel, and/or the same multiple-input and multiple-output (MIMO) setting.
As illustrated, a classifier function 216 divides a global dataset 212 into multiple device type datasets 220a-b], and [¶70, global dataset, with the added labels, is added to the labeled global dataset; expanded labeled global dataset input into the supervised machine learning algorithm; supervised machine learning algorithm modifies and/or refines the classifier function based on the new device data of the global dataset; labeled global dataset is iteratively expanded and/or modified, such that the accuracy of the classifier function is iteratively improved], and [¶126, profile builder selects at least a subset of the cluster groups obtained from a device type dataset as relevant cluster groups; selection of relevant cluster groups, from the cluster groups obtained from a device type dataset, is based on the attributes that are known for the target device], and [¶180, A classifier function is applied to the global dataset. The classifier function determines that Device Data #01 and Device Data #02 are associated with the device type, Apple iPhone 7. The classifier function determines that Device Data #03 is associated with the device type, Samsung Galaxy S6]. Regarding claim 20, Jha discloses wherein the first device attribute comprises a medical device type of the medical device [see table 1, ¶60, example of several PHSs and medical devices (see column 1 for types) and their associated threats and risks (equated to a vulnerability database)]. Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. US2013/0247194 to Jha (filed in IDS 06/29/2022) in view of US Patent Application Publication No. US2018/0225592 to Ponnuswamy, in view of US Patent No. 11,593,700 to Ansari, and further in view of US Patent Application Publication No. US2008/0289027 to Yariv (filed in IDS 06/29/2022).
Regarding claims 2 and 12, Jha, Ponnuswamy, and Ansari do not explicitly disclose, however, Yariv discloses wherein the device attribute includes use of an unencrypted communications protocol, wherein the behavior of the medical device includes a connection to the Internet, wherein the plurality of known exploits includes connecting to the Internet while using the unencrypted communications protocol [¶59, a broad firewall rule may require that most connections be encrypted and checked for integrity, but a narrower firewall rule that takes precedence over the broad firewall rule may require only integrity checking and not encryption if the connection is being made to a particular computer behind the firewall or to a particular service transmitting or receiving the data], and [¶68, The firewall rule may also store an indicator of what range of remote addresses (the address of the sender/receiver) to which it applies, for both Internet Protocol Version 4 (IPv4) (the RA4 field) and Internet Protocol Version 6 (IPv6) (the RA6 field)], and [¶82, see FIGS. 10A-B, communication network 1000 may be any suitable wired and/or wireless communication medium or media for exchanging data between two or more computers, including the Internet]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Jha, Ponnuswamy, and Ansari with the teaching of Yariv in order to establish and/or implement firewall rules that may employ parameters based on connection security levels for a connection between devices which may require no encryption [Yariv, Abstract, ¶59].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Beard (US2020/0201620) [0030] FIG. 1 is an example block diagram showing a SBOM validation system 100 as per an aspect of the disclosure. The SBOM validation system 100 may comprise a plurality of medical devices (110, 111 . . . 119). The plurality of medical devices (110, 111 . . . 119) may be in communication with computer network 130. The SBOM validation system 10

Prosecution Timeline

Jun 29, 2022
Application Filed
Jan 25, 2023
Non-Final Rejection — §103
May 30, 2023
Response Filed
Jun 07, 2023
Final Rejection — §103
Aug 14, 2023
Response after Non-Final Action
Sep 06, 2023
Applicant Interview (Telephonic)
Sep 06, 2023
Response after Non-Final Action
Sep 13, 2023
Request for Continued Examination
Oct 03, 2023
Response after Non-Final Action
Oct 30, 2023
Non-Final Rejection — §103
Feb 05, 2024
Response Filed
Apr 25, 2024
Final Rejection — §103
Aug 30, 2024
Request for Continued Examination
Sep 03, 2024
Response after Non-Final Action
Sep 29, 2024
Non-Final Rejection — §103
Feb 28, 2025
Response Filed
May 11, 2025
Final Rejection — §103
Aug 25, 2025
Response after Non-Final Action
Nov 17, 2025
Request for Continued Examination
Nov 22, 2025
Response after Non-Final Action
Nov 25, 2025
Non-Final Rejection — §103
Apr 01, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587392
SECURE COMMUNICATION METHOD AND APPARATUS IN PASSIVE OPTICAL NETWORK
2y 5m to grant Granted Mar 24, 2026
Patent 12549527
MULTI-FACTOR AUTHENTICATION OF CLOUD-MANAGED SERVICES
2y 5m to grant Granted Feb 10, 2026
Patent 12547755
TECHNIQUES FOR SECURELY EXECUTING ATTESTED CODE IN A COLLABORATIVE ENVIRONMENT
2y 5m to grant Granted Feb 10, 2026
Patent 12543044
SYSTEMS AND METHODS OF AUTOMATIC OUT-OF-BAND (OOB) RESTRICTED CELLULAR CONNECTIVITY FOR SET UP PROVISIONING OF MANAGED CLIENT INFORMATION HANDLING SYSTEMS
2y 5m to grant Granted Feb 03, 2026
Patent 12511435
DEVICE AND METHOD FOR ENFORCING A DATA POLICY
2y 5m to grant Granted Dec 30, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
79%
Grant Probability
87%
With Interview (+7.8%)
2y 8m
Median Time to Grant
High
PTA Risk
Based on 433 resolved cases by this examiner. Grant probability derived from career allow rate.
