Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office action is in response to the claims filed on 08/15/2025.
Claims 14-19 are presented for examination. Claims 1-13 are withdrawn.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 14, 15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Valys et al. (Pub. No. US 2019/0038148, hereinafter Valys) in view of Nosrati et al. (Pat. No. US 8,433,399, hereinafter Nosrati), further in view of Sadhvani et al. (Pub. No. US 2021/0327572, hereinafter Sadhvani), and further in view of GIL DA COSTA et al. (Pub. No. US 2017/0215757, hereinafter GIL DA COSTA).
Regarding claim 14, Valys teaches a computer-implemented method for executing a software platform, wherein the method comprises: receiving, at a mobile computing device associated with a user, low fidelity data (Valys, [Par.0083, 0092], “[0083] In some embodiments, system 700 may be provided on the mobile devices alone, in combination with other mobile devices, or in combination with other computing systems via communication through a network through which these devices may communicate. For example and not by way of limitation, system 700 may be a smart watch or wearable with machine learning model 702 and health detector 704 located on the device, e.g., the memory of the watch or firmware on the watch. The watch may have user input generator 706 and communicate with other computing devices (e.g. mobile phone, tablet, lap top computer or desk top computer) via direct communication, wireless communication (e.g., WiFi, sound, Bluetooth etc) or through a network (e.g., internet, intranet, extranet etc.) or a combination thereof, where trained machine learning model 702 and health detector 704 may be located on the other computing devices. [0092] ...which may be in some embodiments the mobile or wearable computing system used to collect the user's heart rate data (or other health-indicator data), and in step 916 the low-fidelity health-indicator data sequence (heart rate data in this example) is labeled with the diagnosis. In step 918, the labeled user's low-fidelity data sequence is used to train a high-fidelity machine learning model, and optionally other-factor data sequence is also provided to train the model.” Examiner’s note: the health-indicator data is considered the low-fidelity data, which is received at the mobile computing system (mobile phone, tablet, laptop computer) to train the machine learning model.),
wherein the data is received via a local network connection (Valys, [Par.0091], “Step 908 determines if an anomaly is present or not. As discussed this may be determined if the loss exceeds a threshold. As previously described, the threshold is set by choice of the designer and based on the purpose of the system being designed. In some embodiments the threshold may be modified by the user, but preferably not so in this embodiment. If an anomaly is not present, the process is repeated at step 904. If an anomaly is present, step 910 notifies or alerts the user to obtain a high-fidelity measurement, an ECG or blood pressure measurement for example and not by way of limitation. In step 912, the high-fidelity data is analyzed by an algorithm, a health professional or both and is described as normal or not normal, and if not normal some diagnosis may be assigned, e.g., AFib, tachycardia, bradycardia, atrial flutter, or high/low blood pressure depending on the high-fidelity measurement obtained. It is noted for clarity, that notification to record high-fidelity data is equally applicable and possible in other embodiments, and in particular embodiments using general models described above. The high-fidelity measurement, in some embodiments, may be obtained directly by the user using a mobile monitoring system, such as ECG or blood pressure systems, which may be associated with the wearable device in some embodiments. Alternatively, the notification step 910 causes automatic acquisition of the high-fidelity measurement. For example, the wearable device may communicate with a sensor (hard-wired or via wireless communication) and obtain ECG data, or it may communicate with a blood pressure cuff-system (e.g., wrist band of a wearable or an armband cuff) to automatically obtain a blood pressure measurement, or it may communicate with an implanted device such as a pace maker or ECG electrodes. 
Systems for remotely obtaining an ECG are provided, for example, by AliveCor, Inc., such systems include (without limitation) one or more sensors contacting the user in two or more locations, where the sensor collects electrical cardiac data that is transmitted, either wired or wirelessly, to a mobile computing device, where an app generates an ECG strip from the data, which can be analyzed by algorithms, a medical professional or both. Alternatively, the sensor may be a blood pressure monitor, where the blood pressure data are transmitted, either wired or wirelessly, to the mobile computing device.”);
using a wide area network, transmitting the data to a remote computing device (Valys, [Par.0083], “input generator 706 may also collect data to determine/calculate other-factor data. Input generator, for example and not by way of limitation, may include a smart watch, wearable or mobile device (e.g., Apple Watch® or FitBit® smart phone, tablet or laptop computer), a combination of smart watch and mobile device, a surgically implanted device with the ability to transmit data to a mobile device or other portable computing device, or a device on a cart in a medical care facility. Preferably user input generator 706 has a sensor (e.g., PPG sensor, electrode sensor) to measure data related to one or more health-indicators. The smart watch, tablet, mobile phone or laptop computer of some embodiments may carry the sensor or the sensor may be remotely placed (surgically embedded, contacted to the body remote from the mobile device, or some separate device) where, in all these cases, the mobile device communicates with the sensor in order to gather health-indicator data. In some embodiments, system 700 may be provided on the mobile devices alone, in combination with other mobile devices, or in combination with other computing systems via communication through a network through which these devices may communicate. For example and not by way of limitation, system 700 may be a smart watch or wearable with machine learning model 702 and health detector 704 located on the device, e.g., the memory of the watch or firmware on the watch. The watch may have user input generator 706 and communicate with other computing devices (e.g. mobile phone, tablet, lap top computer or desk top computer) via direct communication, wireless communication (e.g., WiFi, sound, Bluetooth etc) or through a network (e.g., internet, intranet, extranet etc.) 
or a combination thereof, where trained machine learning model 702 and health detector 704 may be located on the other computing devices.” And [Par.0091], “Alternatively, the notification step 910 causes automatic acquisition of the high-fidelity measurement. For example, the wearable device may communicate with a sensor (hard-wired or via wireless communication) and obtain ECG data, or it may communicate with a blood pressure cuff-system (e.g., wrist band of a wearable or an armband cuff) to automatically obtain a blood pressure measurement, or it may communicate with an implanted device such as a pace maker or ECG electrodes. Systems for remotely obtaining an ECG are provided, for example, by AliveCor, Inc., such systems include (without limitation) one or more sensors contacting the user in two or more locations, where the sensor collects electrical cardiac data that is transmitted, either wired or wirelessly, to a mobile computing device, where an app generates an ECG strip from the data, which can be analyzed by algorithms, a medical professional or both.” Examiner’s note: sensor data is transmitted, either wired or wirelessly, to a mobile computing device (remote computing system).);
training, at the remote computing device, a machine learning model to produce high fidelity data based on the low fidelity data (Valys [Par.0024-0025], “In some embodiments, measured health-indicator data alone or in combination with other-factor data is input into a trained machine learning model that determines a probability the user's measured health-indicator is considered within a healthy range, and if not to notify the user of such. The user not being in a healthy range may increase the likelihood the user may be experiencing a health event warranting high-fidelity information to confirm a diagnosis, such as an arrhythmia which may be symptomatic or asymptomatic... s.[0025] In further embodiments, a diagnosis is used to label a low-fidelity data sequence (e.g., heart rate or PPG), which may include the other-factor data sequence. This high-fidelity diagnosis-labeled low-fidelity data sequence is used to train a high-fidelity machine learning model. In these further embodiments, the training of the high-fidelity machine learning model may be trained by unsupervised learning or may be updated from time to time with new training examples. In some embodiments, a user's measured low-fidelity health-indicator data sequence and optionally a corresponding (in time) data sequence of other-factors are input into the trained high-fidelity machine learning models to determine a probability and/or prediction the user is experiencing or experienced the diagnosed condition on which the high-fidelity machine learning model was trained...” [Par.0083], “Input generator 706 may also collect data to determine/calculate other-factor data. 
Input generator, for example and not by way of limitation, may include a smart watch, wearable or mobile device (e.g., Apple Watch® or FitBit® smart phone, tablet or laptop computer), a combination of smart watch and mobile device, a surgically implanted device with the ability to transmit data to a mobile device or other portable computing device, or a device on a cart in a medical care facility. Preferably user input generator 706 has a sensor (e.g., PPG sensor, electrode sensor) to measure data related to one or more health-indicators. The smart watch, tablet, mobile phone or laptop computer of some embodiments may carry the sensor or the sensor may be remotely placed (surgically embedded, contacted to the body remote from the mobile device, or some separate device) where, in all these cases, the mobile device communicates with the sensor in order to gather health-indicator data. In some embodiments... For example and not by way of limitation, system 700 may be a smart watch or wearable with machine learning model 702 and health detector 704 located on the device, e.g., the memory of the watch or firmware on the watch. The watch may have user input generator 706 and communicate with other computing devices (e.g. mobile phone, tablet, lap top computer or desk top computer) via direct communication, wireless communication (e.g., WiFi, sound, Bluetooth etc) or through a network (e.g., internet, intranet, extranet etc.) or a combination thereof, where trained machine learning model 702 and health detector 704 may be located on the other computing devices.” And [0092], “which may be in some embodiments the mobile or wearable computing system used to collect the user's heart rate data (or other health-indicator data), and in step 916 the low-fidelity health-indicator data sequence (heart rate data in this example) is labeled with the diagnosis. 
In step 918, the labeled user's low-fidelity data sequence is used to train a high-fidelity machine learning model....” Examiner’s note: the high-fidelity machine learning model is trained on the low-fidelity (health-indicator) data to determine a prediction of a high-fidelity outcome; therefore, the high-fidelity outcome is considered the high-fidelity data. The low-fidelity/health-indicator data is received from the sensor by the computing device (remote computing device), where the machine learning model is located.),
wherein the high fidelity data is associated with a function to perform via the mobile computing device (Valys [Par.0005], “A mobile sensor platform (for example: a mobile blood pressure cuff; mobile heart rate monitor; or mobile ECG device) may be capable of monitoring the health-indicator (e.g., heart rate) continuously, e.g., producing a measurement every second or every 5 seconds, while simultaneously also acquiring other data about the user such as and without limitation: activity level, body position, and environmental parameters like air temperature, barometric pressure, location, etc. In a 24-hour period, this may result in many thousands of independent health-indicator measurements.” And [Par.0024-0025], “In some embodiments, measured health-indicator data alone or in combination with other-factor data is input into a trained machine learning model that determines a probability the user's measured health-indicator is considered within a healthy range, and if not to notify the user of such. The user not being in a healthy range may increase the likelihood the user may be experiencing a health event warranting high-fidelity information to confirm a diagnosis, such as an arrhythmia which may be symptomatic or asymptomatic... s.[0025] In further embodiments, a diagnosis is used to label a low-fidelity data sequence (e.g., heart rate or PPG), which may include the other-factor data sequence. This high-fidelity diagnosis-labeled low-fidelity data sequence is used to train a high-fidelity machine learning model. In these further embodiments, the training of the high-fidelity machine learning model may be trained by unsupervised learning or may be updated from time to time with new training examples. 
In some embodiments, a user's measured low-fidelity health-indicator data sequence and optionally a corresponding (in time) data sequence of other-factors are input into the trained high-fidelity machine learning models to determine a probability and/or prediction the user is experiencing or experienced the diagnosed condition on which the high-fidelity machine learning model was trained...” Examiner’s note: the high-fidelity data is output by the high-fidelity machine learning model, which is trained on the input sensor data (health-indicator data); the sensor data is continuously measured by a mobile sensor platform (e.g., a mobile blood pressure cuff, mobile heart rate monitor, or mobile ECG device) every 5 seconds or over a 24-hour period, and therefore the mobile sensor platform is considered the mobile computing device. The claim does not define the function; therefore, under the broadest reasonable interpretation, the data measurement every 5 seconds or over a specific time range is considered the function performed at the mobile computing device associated with the high fidelity data.);
However, Valys does not teach transmitting, to the mobile computing device, the high fidelity data to be used by the mobile computing device to perform the function; receiving the data from a microelectrode array of a brain-computer interface; or executing a closed-loop feedback system by receiving feedback pertaining to execution of the function at the mobile computing device and, to further train the machine learning model, transmitting the feedback to the remote computing device.
On the other hand, Nosrati teaches transmitting, to the mobile computing device, the high fidelity data to be used by the mobile computing device to perform the function (Nosrati, [Abstract], “The wireless computing device can include but is not limited to a mobile phone, Tablet-PC or a laptop computer. ECG monitor contains a processor that continuously processes received ECG signals, stores the signals in memory and performs a series of analysis on the recorded data using pre-stored software algorithms. When an abnormality is detected, a wireless transceiver transmits the processed ECG data to a wireless computing device for viewing and further analysis, by displaying the received ECG data for doctor's viewing, sending the data to a web-based server computer for remote access, performing additional advanced analysis on the data and downloading new algorithms and instructions into the ECG monitoring device via telemetry.” Examiner’s note: the ECG data is transmitted to the mobile computing device for further analysis; displaying the received ECG data for a doctor's viewing, sending the data to a web-based server computer for remote access, performing additional advanced analysis on the data, and downloading new algorithms are considered the function.);
Valys and Nosrati are analogous art because they are from the same field of endeavor: generating the user’s health data.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the training, at the remote computing device, of a machine learning model to produce high fidelity data based on the low fidelity data, wherein the high fidelity data is associated with a function to perform via the mobile computing device, as taught by Valys, to include transmitting, to the mobile computing device, the high fidelity data to be used by the mobile computing device to perform the function, as taught by Nosrati. The modification would have been obvious because one of ordinary skill in the art would be motivated to enable further review and analysis (Nosrati, [Col.3, lines 5-17], “What is needed is an ECG device that has the capability to record the patient cardiovascular activity over extended time period such as a 24-hour period or longer in conjunction with the ability to transmit the recorded data automatically or on-demand to an outside wireless computing device. Furthermore, there is a significant need for a wireless ECG monitoring device that is capable of analyzing and scrutinizing the patient's cardiovascular data for arrhythmia and other abnormal heart conditions. Also, in the event that abnormal activity or activities are detected, there is a significant need for an ECG monitor that can transmit recent history packets of the patient's cardiovascular activity prior to and including each abnormal event to a wireless computing device for doctor's viewing and further analysis.”).
However, Valys and Nosrati do not teach receiving data from a microelectrode array of a brain-computer interface, or executing a closed-loop feedback system by receiving feedback pertaining to execution of the function at the mobile computing device and, to further train the machine learning model, transmitting the feedback to the remote computing device.
On the other hand, Sadhvani teaches and executing a closed-loop feedback system by receiving feedback pertaining to execution of the function at the mobile computing device, and to further train the machine learning model (Sadhvani, [Par.0021-0022], “The contextual intelligence system 110 may manage and/or update the contextual intelligent processes based on the user information from the user device 104 and/or the event information from the event provider system 118. For example, the contextual intelligence system 110 may determine one or more contextual intelligent processes based on the user information and/or the event information. Contextual intelligent processes may include a sequence of actions, rules, steps, and/or other parameters for responding to the user information. For example, if the user is diabetic and the user information indicates the user's insulin level is outside of normal ranges, the contextual intelligence system 110 may determine one or more contextual intelligent processes to assist the diabetic user. The sequence of actions, rules, steps, and/or other parameters for the contextual intelligent processes may invoke and/or trigger one or more services (e.g., API calls) hosted by one or more health care provider (HCP) computing platforms (e.g., platforms within the health care provider (HCP) system 116). For instance, the HCP computing platform 116 may be a computing platform associated with a call-center or a computing platform associated with a prescription provider system that provides medical prescriptions for the user. Based on the user information and/or event information, the contextual intelligence system 110 may use artificial intelligence algorithms (e.g., machine learning algorithms and/or deep learning algorithms such as neural networks) to update the contextual intelligent processes in real-time and provide the updated contextual intelligent processes to the health care provider system 116. 
The contextual intelligence system 110 may store the updated contextual intelligent processes in the data repository 114. [0022] The authentication and feedback (AF) system 112 may receive information used to update the artificial intelligence algorithms (e.g., datasets). For example, the AF system 112 may use information such as outputs (e.g., services/API calls) from the updated contextual intelligent processes to update the artificial intelligence datasets. In other words, after the contextual intelligence system 110 updates the contextual intelligent processes, the AF system 112 may determine the services from the contextual intelligent processes. The AF system 112 may compare the outputs (e.g., triggered services) with expected or predicted values (e.g., expected or predicted services). Based on the comparison, the AF system 112 may update the artificial intelligence datasets. For example, the artificial intelligence datasets may be neural networks with weights for each of the different levels (e.g., layers).” Examiner’s note: the enterprise computing system receives additional information from the user’s device (mobile computing device); this information is considered the feedback that is used to update/retrain the machine learning algorithm.).
Sadhvani also teaches transmitting the feedback to the remote computing device (Sadhvani, [Par.0019-0022], “[0019] The enterprise computing system 108 includes one or more computing devices, computing platforms, systems, servers, and/or other apparatuses capable of performing tasks, functions, and/or other actions for the enterprise organization…[0022] The authentication and feedback (AF) system 112 may receive information used to update the artificial intelligence algorithms (e.g., datasets). For example, the AF system 112 may use information such as outputs (e.g., services/API calls) from the updated contextual intelligent processes to update the artificial intelligence datasets. In other words, after the contextual intelligence system 110 updates the contextual intelligent processes, the AF system 112 may determine the services from the contextual intelligent processes. The AF system 112 may compare the outputs (e.g., triggered services) with expected or predicted values (e.g., expected or predicted services). Based on the comparison, the AF system 112 may update the artificial intelligence datasets” Examiner’s note: the enterprise computing system receives information from the user’s device, which is used to update/retrain the machine learning algorithm. Therefore, the information is considered the feedback that is transmitted to the enterprise computing system (remote computing device).).
Valys, Nosrati, and Sadhvani are analogous art because they are from the same field of endeavor: generating the user’s health data.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the transmitting, to the mobile computing device, of the high fidelity data to be used by the mobile computing device to perform the function, as taught by Valys in view of Nosrati, to include executing a closed-loop feedback system by receiving feedback pertaining to execution of the function at the mobile computing device and, to further train the machine learning model, transmitting the feedback to the remote computing device, as taught by Sadhvani. The modification would have been obvious because one of ordinary skill in the art would be motivated to update the contextual intelligent processes in real time (Sadhvani, [Par.0021], “Based on the user information and/or event information, the contextual intelligence system 110 may use artificial intelligence algorithms (e.g., machine learning algorithms and/or deep learning algorithms such as neural networks) to update the contextual intelligent processes in real-time and provide the updated contextual intelligent processes to the health care provider system 116. The contextual intelligence system 110 may store the updated contextual intelligent processes in the data repository 114.”).
However, Valys, Nosrati, and Sadhvani do not teach receiving low fidelity data from a microelectrode array of a brain-computer interface.
On the other hand, GIL DA COSTA teaches receiving low fidelity data from a microelectrode array of a brain-computer interface (GIL DA COSTA, [Par.0004-0006], “One exemplary type of physiological signal that can be used in sensory and/or cognitive analysis is electroencephalography (EEG). EEG is the recording of electrical activity exhibited by the brain using electrodes positioned on a subject's head, forming a spectral content of neural signal oscillations that comprise an EEG data set. For example, the electrical activity of the brain that is detected by EEG techniques can include voltage fluctuations that may result from ionic current flows within the neurons of the brain. In some contexts, EEG refers to the recording of the brain's spontaneous electrical activity over specific periods of time. EEG can be used in clinical diagnostic applications including epilepsy, coma, encephalopathies, brain death, and other diseases and defects, as well as in studies of sleep and sleep disorders. In some instances, EEG has been used for the diagnosis of tumors, stroke and other focal brain disorders. [0005] One example of an EEG technique includes recording of event-related potentials (ERPs), which refer to EEG recorded brain responses that are correlated with a given event (e.g., simple stimulation and complex processes). For example, an ERP includes an electrical brain response—a brain wave—related to sensory, motor, and/or cognitive processing. ERPs can be associated with brain measures of perception (e.g., visual, auditory, etc.) and cognition (e.g., attention, language, decision making, etc.)…[0006] The device also includes a data processing unit defined within the casing unit and in communication with the data acquisition unit.
The data processing unit is configured to include a signal processing circuit to amplify and digitize the detected electrophysiological signals as data, a processor to process the data, a memory to store the data, and a transmitter to transmit the data to a remote computer system.” And [Par.0020], “...The disclosed devices, systems, and methods use an array of electrode sensors positioned in a specialized configuration about the subject's forehead in tandem with a powered circuit board that improves the physiological signal detection and processing to produce various user assessments such as psychological states and/or behavioral preferences, among other various other types of cognitive and/or sensory processes evaluation, as well as brain-computer interface operations.” Examiner’s note: the devices, systems, and methods use an array of electrode sensors positioned in a specialized configuration about the subject's forehead to collect the electrophysiological signals and transmit them to the remote computer system, wherein the devices and systems include brain-computer interface operations.)
Valys, Nosrati, Sadhvani, and GIL DA COSTA are analogous art because they are from the same field of endeavor: generating the user’s health data.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the receiving, at a mobile computing device associated with a user, of low fidelity data, as taught by Valys, to include receiving the data from a microelectrode array of a brain-computer interface, as taught by GIL DA COSTA. The modification would have been obvious because one of ordinary skill in the art would be motivated to detect the physiological signal/data (GIL DA COSTA, [Par.0020], “The disclosed devices, systems, and methods use an array of electrode sensors positioned in a specialized configuration about the subject's forehead in tandem with a powered circuit board that improves the physiological signal detection and processing to produce various user assessments such as psychological states and/or behavioral preferences, among other various other types of cognitive and/or sensory processes evaluation, as well as brain-computer interface operations. Devices of the disclosed technology have been designed and may be configured in both portable and non-portable design forms that provide easy and user-friendly operation and comfort-of-use. Systems of the disclosed technology have been designed and implemented using firmware and/or software to allow a user or operator to easily interface with the functional units of the system, allowing them to perform multiple operations, including but not limited to acquire, transmit, assess, and access interpretation of the physiological and/or behavioral data.”).
Regarding claim 15, Valys teaches the computer-implemented method of claim 14, wherein a plurality of mobile computing devices is associated with a plurality of users (Valys, [Par.0005-0006], “[0005] A mobile sensor platform (for example: a mobile blood pressure cuff; mobile heart rate monitor; or mobile ECG device) may be capable of monitoring the health-indicator (e.g., heart rate) continuously, e.g., producing a measurement every second or every 5 seconds, while simultaneously also acquiring other data about the user such as and without limitation: activity level, body position, and environmental parameters like air temperature, barometric pressure, location, etc. [0006] Devices presently used to continuously measure health-indicators of users/patients range from bulky, invasive, and inconvenient to simple wearable or handheld mobile devices.” Examiner’s note: the mobile ECG device is used to measure the user’s health-indicator data.).
However, Valys does not teach that each of the plurality of users is using a brain-computer interface, nor that the method further comprises executing the closed-loop feedback system based on a plurality of feedback received from the plurality of mobile computing devices.
On the other hand, Sadhvani teaches wherein the method further comprises executing the closed-loop feedback system based on a plurality of feedback received from the plurality of mobile computing devices (Sadhvani, [Par.0021-0022], “The contextual intelligence system 110 may manage and/or update the contextual intelligent processes based on the user information from the user device 104 and/or the event information from the event provider system 118. For example, the contextual intelligence system 110 may determine one or more contextual intelligent processes based on the user information and/or the event information. Contextual intelligent processes may include a sequence of actions, rules, steps, and/or other parameters for responding to the user information. For example, if the user is diabetic and the user information indicates the user's insulin level is outside of normal ranges, the contextual intelligence system 110 may determine one or more contextual intelligent processes to assist the diabetic user. The sequence of actions, rules, steps, and/or other parameters for the contextual intelligent processes may invoke and/or trigger one or more services (e.g., API calls) hosted by one or more health care provider (HCP) computing platforms (e.g., platforms within the health care provider (HCP) system 116). For instance, the HCP computing platform 116 may be a computing platform associated with a call-center or a computing platform associated with a prescription provider system that provides medical prescriptions for the user. Based on the user information and/or event information, the contextual intelligence system 110 may use artificial intelligence algorithms (e.g., machine learning algorithms and/or deep learning algorithms such as neural networks) to update the contextual intelligent processes in real-time and provide the updated contextual intelligent processes to the health care provider system 116.
The contextual intelligence system 110 may store the updated contextual intelligent processes in the data repository 114. [0022] The authentication and feedback (AF) system 112 may receive information used to update the artificial intelligence algorithms (e.g., datasets). For example, the AF system 112 may use information such as outputs (e.g., services/API calls) from the updated contextual intelligent processes to update the artificial intelligence datasets. In other words, after the contextual intelligence system 110 updates the contextual intelligent processes, the AF system 112 may determine the services from the contextual intelligent processes. The AF system 112 may compare the outputs (e.g., triggered services) with expected or predicted values (e.g., expected or predicted services). Based on the comparison, the AF system 112 may update the artificial intelligence datasets. For example, the artificial intelligence datasets may be neural networks with weights for each of the different levels (e.g., layers).” Examiner’s note, the additional information that the enterprise computing system receives from the user’s device (mobile computing device) is considered the feedback that is used to update/retrain the machine learning algorithm.).
Valys and Sadhvani are analogous art because they have the same field of endeavor of generating the user’s health data.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the plurality of mobile computing devices associated with a plurality of users, as taught by Valys, to include executing the closed-loop feedback system based on a plurality of feedback received from the plurality of mobile computing devices, as taught by Sadhvani. The modification would have been obvious because one of ordinary skill in the art would have been motivated to update the contextual intelligent processes in real time (Sadhvani, [Par.0021], “Based on the user information and/or event information, the contextual intelligence system 110 may use artificial intelligence algorithms (e.g., machine learning algorithms and/or deep learning algorithms such as neural networks) to update the contextual intelligent processes in real-time and provide the updated contextual intelligent processes to the health care provider system 116. The contextual intelligence system 110 may store the updated contextual intelligent processes in the data repository 114.”).
However, Valys and Sadhvani do not teach that each of the plurality of users is using a brain-computer interface.
On the other hand, GIL DA COSTA teaches that each of the plurality of users is using a brain-computer interface (GIL DA COSTA, [Par.0004-0006], “One exemplary type of physiological signal that can be used in sensory and/or cognitive analysis is electroencephalography (EEG). EEG is the recording of electrical activity exhibited by the brain using electrodes positioned on a subject's head, forming a spectral content of neural signal oscillations that comprise an EEG data set. For example, the electrical activity of the brain that is detected by EEG techniques can include voltage fluctuations that may result from ionic current flows within the neurons of the brain. In some contexts, EEG refers to the recording of the brain's spontaneous electrical activity over specific periods of time. EEG can be used in clinical diagnostic applications including epilepsy, coma, encephalopathies, brain death, and other diseases and defects, as well as in studies of sleep and sleep disorders. In some instances, EEG has been used for the diagnosis of tumors, stroke and other focal brain disorders. [0005] One example of an EEG technique includes recording of event-related potentials (ERPs), which refer to EEG recorded brain responses that are correlated with a given event (e.g., simple stimulation and complex processes). For example, an ERP includes an electrical brain response—a brain wave—related to sensory, motor, and/or cognitive processing. ERPs can be associated with brain measures of perception (e.g., visual, auditory, etc.) and cognition (e.g., attention, language, decision making, etc.)…[0006], The device also includes a data processing unit defined within the casing unit and in communication with the data acquisition unit.
The data processing unit is configured to include a signal processing circuit to amplify and digitize the detected electrophysiological signals as data, a processor to process the data, a memory to store the data, and a transmitter to transmit the data to a remote computer system.” And [Par.0020], “... The disclosed devices, systems, and methods use an array of electrode sensors positioned in a specialized configuration about the subject's forehead in tandem with a powered circuit board that improves the physiological signal detection and processing to produce various user assessments such as psychological states and/or behavioral preferences, among other various other types of cognitive and/or sensory processes evaluation, as well as brain-computer interface operations.” Examiner’s note, the devices, systems, and methods use an array of electrode sensors positioned in a specialized configuration about the subject's forehead to collect the electrophysiological signals and transmit them to the remote computer system, wherein the devices and systems include brain-computer interface operations.).
Valys, Sadhvani and GIL DA COSTA are analogous art because they have the same field of endeavor of generating the user’s health data.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify wherein a plurality of mobile computing devices is associated with a plurality of users, as taught by Valys, to include each of the plurality of users using a brain-computer interface, as taught by GIL DA COSTA. The modification would have been obvious because one of ordinary skill in the art would have been motivated to detect the physiological signal/data (GIL DA COSTA, [Par.0020], “The disclosed devices, systems, and methods use an array of electrode sensors positioned in a specialized configuration about the subject's forehead in tandem with a powered circuit board that improves the physiological signal detection and processing to produce various user assessments such as psychological states and/or behavioral preferences, among other various other types of cognitive and/or sensory processes evaluation, as well as brain-computer interface operations. Devices of the disclosed technology have been designed and may be configured in both portable and non-portable design forms that provide easy and user-friendly operation and comfort-of-use. Systems of the disclosed technology have been designed and implemented using firmware and/or software to allow a user or operator to easily interface with the functional units of the system, allowing them to perform multiple operations, including but not limited to acquire, transmit, assess, and access interpretation of the physiological and/or behavioral data.”).
Regarding claim 17, Valys teaches the computer-implemented method of claim 16, further comprising training the machine learning model using the user variations (Valys [Par.0034], “input generator 706 may also collect data to determine/calculate other-factor data. Input generator, for example and not by way of limitation, may include a smart watch, wearable or mobile device (e.g., Apple Watch® or FitBit® smart phone, tablet or laptop computer), a combination of smart watch and mobile device, a surgically implanted device with the ability to transmit data to a mobile device or other portable computing device, or a device on a cart in a medical care facility. Preferably user input generator 706 has a sensor (e.g., PPG sensor, electrode sensor) to measure data related to one or more health-indicators. The smart watch, tablet, mobile phone or laptop computer of some embodiments may carry the sensor or the sensor may be remotely placed (surgically embedded, contacted to the body remote from the mobile device, or some separate device) where, in all these cases, the mobile device communicates with the sensor in order to gather health-indicator data. In some embodiments, system 700 may be provided on the mobile devices alone, in combination with other mobile devices, or in combination with other computing systems via communication through a network through which these devices may communicate. For example and not by way of limitation, system 700 may be a smart watch or wearable with machine learning model 702 and health detector 704 located on the device, e.g., the memory of the watch or firmware on the watch. The watch may have user input generator 706 and communicate with other computing devices (e.g. mobile phone, tablet, lap top computer or desk top computer) via direct communication, wireless communication (e.g., WiFi, sound, Bluetooth etc) or through a network (e.g., internet, intranet, extranet etc.) 
or a combination thereof, where trained machine learning model 702 and health detector 704 may be located on the other computing devices.” Examiner’s note, the machine learning model is trained on the other computing device.).
Regarding claim 18, Valys teaches the computer-implemented method of claim 14, further comprising providing an interface to access the software platform via the wide area network (Valys, [Par.0083, 0085], “0083 The smart watch, tablet, mobile phone or laptop computer of some embodiments may carry the sensor or the sensor may be remotely placed (surgically embedded, contacted to the body remote from the mobile device, or some separate device) where, in all these cases, the mobile device communicates with the sensor in order to gather health-indicator data. In some embodiments, system 700 may be provided on the mobile devices alone, in combination with other mobile devices, or in combination with other computing systems via communication through a network through which these devices may communicate. For example and not by way of limitation, system 700 may be a smart watch or wearable with machine learning model 702 and health detector 704 located on the device, e.g., the memory of the watch or firmware on the watch. The watch may have user input generator 706 and communicate with other computing devices (e.g. mobile phone, tablet, lap top computer or desk top computer) via direct communication, wireless communication (e.g., WiFi, sound, Bluetooth etc) or through a network (e.g., internet, intranet, extranet etc.) or a combination thereof, where trained machine learning model 702 and health detector 704 may be located on the other computing devices.” And [0085], The notification, as described herein, may take many forms. In some embodiments, this information may be visualized to the user. For example and not by way of limitation the information can be displayed on a user interface such as a graph that shows (i) measured health-indicator data (e.g., heart rate) and other-factor data (e.g., step count) as a function of time, (ii) a distribution of predicted health-indicator data (e.g., predicted heart rate values) generated by the machine learning model. 
In this way, the user can visually compare the measured data points to the predicted data points and determine by visual inspection whether their heart rate, for example, falls into the range expected by the machine learning model.”).
Regarding claim 19, Valys teaches the computer-implemented method of claim 14, further comprising: replaying a scenario, wherein the scenario comprises the high fidelity data and execution of the function at the mobile computing device (Valys, [Par.0025], “In further embodiments, a diagnosis is used to label a low-fidelity data sequence (e.g., heart rate or PPG), which may include the other-factor data sequence. This high-fidelity diagnosis-labeled low-fidelity data sequence is used to train a high-fidelity machine learning model. In these further embodiments, the training of the high-fidelity machine learning model may be trained by unsupervised learning or may be updated from time to time with new training examples. In some embodiments, a user's measured low-fidelity health-indicator data sequence and optionally a corresponding (in time) data sequence of other-factors are input into the trained high-fidelity machine learning models to determine a probability and/or prediction the user is experiencing or experienced the diagnosed condition on which the high-fidelity machine learning model was trained. This probability may include a probability of when the event begins and when it ends. Some embodiments, for example, may calculate the atrial fibrillation (AF) burden of a user, or the amount of time a user experiences AF over time.” Examiner’s note, the high-fidelity machine learning model is trained on the new training data.);
based on the replayed scenario, using a second machine learning model to simulate execution of a second scenario, wherein the second scenario comprises second high fidelity data and execution of a second function at the mobile computing device (Valys, [Par.0028], “In some embodiments, the user may be prompted to obtain additional measured high-fidelity data that can be used to label previously acquired low-fidelity user health-indicator data to generate a different trained high-fidelity machine learning model that has the ability to predict or diagnose abnormalities or events using only low-fidelity health-indicator data, where such abnormalities are typically only identified or diagnosed using high-fidelity data.” Examiner’s note, the additional measured high-fidelity data is used to train a different machine learning model (the second machine learning model).).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Valys et al. (Pub. No. US20190038148-hereinafter, Valys) in view of Nosrati et al. (Patent No. US 8433399-hereinafter, Nosrati) and further in view of Sadhvani et al. (Pub. No. US 20210327572-hereinafter, Sadhvani) and further in view of GIL DA COSTA et al. (Pub. No. US 20170215757-hereinafter, GIL DA COSTA) and further in view of Bar-Yam et al. (Pub. No. US 20160055190-hereinafter, Bar-Yam).
Regarding claim 16, Valys in view of Sadhvani teaches the computer-implemented method of claim 15, further comprising identifying user variations between a first element of the plurality of feedback and a second element of the plurality of feedback (Sadhvani [Par.0022], “The authentication and feedback (AF) system 112 may receive information used to update the artificial intelligence algorithms (e.g., datasets). For example, the AF system 112 may use information such as outputs (e.g., services/API calls) from the updated contextual intelligent processes to update the artificial intelligence datasets. In other words, after the contextual intelligence system 110 updates the contextual intelligent processes, the AF system 112 may determine the services from the contextual intelligent processes. The AF system 112 may compare the outputs (e.g., triggered services) with expected or predicted values (e.g., expected or predicted services). Based on the comparison, the AF system 112 may update the artificial intelligence datasets. For example, the artificial intelligence datasets may be neural networks with weights for each of the different levels (e.g., layers). The AF system 112 may use loss functions to update the weights of the neural networks ba