Notice of Pre-AIA or AIA Status
DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This final Office action on the merits is in response to the communication received on 26 December 2025. The amendments to claim 1 are acknowledged and have been carefully considered. Claim 3 is cancelled. Claims 1, 2, and 4-9 are pending and are considered below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, and 4-9 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Badik et al. (US 2023/0355478) in view of Cohan et al. (US 2023/0157550) and further in view of Khachaturian et al. (US 2016/0000327).
Claim 1: Badik discloses a drug dispenser ([137 “drug dispenser” as used herein refers to a device which releases medication at specified times,”]) capable of measuring non-contact biometric information ([150 “collect or harvest acoustical data for biometric analysis (by a processor) or for digital or analog voice communications. A “sensor” can include any one or more of a physiological sensors (e.g., blood pressure, heartbeat, etc.), a biometric sensor (e.g., a heart signature, a fingerprint, etc.), an environmental sensor (e.g., temperature, particles, chemistry, etc.), a neurological sensor (e.g., brainwaves, electroencephalogram (EEG) etc.), or an acoustic sensor,”]), the drug dispenser comprising:
a drug supplier that puts in drugs and discharges stored drugs ([199, 200, 201 “drug dispenser 202 can comprise a processor 210, memory 212, an outlet 280 and a communication module 230 and may execute software to perform the functions described here. As illustrated here, the drug dispenser 202 may also include a drug storage comprising a number of removable vessels or medication cartridges. As will be seen, these vessels or medication cartridges may be pre-loaded by a pharmacy with prescription medication for the user of the drug dispenser,” 211]);
a vision sensor that takes an image of a person who takes drugs ([211 “user interface 204 may comprise of, or include, an authentication sensor, such as a fingerprint scanner, a fingerprint, an eyeblink scanner, a retina scanner, an iris scanner, an eye scanner, and a facial image scanner to read biometric information of a user of the drug dispenser to be used to authenticate the user before dispensing or loading of medication from or into the drug dispenser,”]);
a controller comprising a user identification module that recognizes and identifies a user on the basis of an image taken by the vision sensor ([227 “patient is asked to provide select details related to at least one of the patient's personal information, as well as a unique identity and at least one biometric marker associated with the user such as a fingerprint, a facial image, a retinal image, or an iris image, for example,”]);
a communication unit configured to transmit the information measured by the controller to the outside ([202 “drug dispenser 202 comprises the user interface 204, input interface 205, the output interface 208, processor 210, memory 212, and the communication module 230 embedded on to the drug dispenser,” 211-213, Fig. 2A]).
Badik does not explicitly disclose, however Cohan discloses:
a medication management module that confirms whether a user took drugs and manages a medication state of the user on the basis of the image taken by the vision sensor ([31 “issue of contactless monitoring vital signs in real-time. In an aspect, the methods and systems can monitor vital signs from multiple individuals simultaneously and in real-time. This is possible by using optimized software for streaming high-resolution frames of the video feed(s) directly into a computing system that can analyze important regions of each frame to compute metrics quickly and then move to a next frame,” 35 “plurality of camera devices 120 can send the video signals to the biometric monitoring subsystem 130 as the video signal is generated. The biometric monitoring subsystem 130 can generate values of respective vital signs in nearly real-time, for the multiple subjects simultaneously. Those values are generated using the video signal as the video signal is received at the biometric monitoring subsystem,” 48 “vital sign corresponding to an oscillatory physiological quantity (e.g., the pumping of blood by a heart or the intake of air at a lung) a value of the vital sign can be obtained by transforming the time series from time domain to frequency domain. By updating the time series to include new frames received as the video signal is generated by the camera devices 120, the value of the vital sign can be updated at the rate in which the time series is updated,” 73 “send an instruction to the automation control subsystem 410 to notify one or many of the devices 420 that a pain medication may need to be dispensed, e.g., distributed based on discomfort levels,”]);
Badik does not explicitly disclose, however Khachaturian further discloses:
a vital sign detection module configured to analyze temporal variation of pixel intensity data corresponding to blood flow from a predefined plurality of skin regions of a user's face ([167 “includes a blood-flow-analyzer module 1202 that analyzes a temporal variation to generate a pattern of flow of blood 1204. One example of the temporal variation is temporal variation 1122 in FIG. 11. In some implementations, the pattern flow of blood 1204 is generated from motion changes in the pixels and the temporal variation of color changes in the skin of the images 704. In some implementations, apparatus 1200 includes a blood-flow display module 1206 that displays the pattern of flow of blood 1204 for review by a healthcare worker,” 198, 227 “visualize flow of blood filling a face in the video and also amplify and reveal small motions, and other vital signs such as blood pressure, respiration, EKG and pulse. Method 1900 can execute in real time to show phenomena occurring at temporal frequencies selected by the operator. A combination of spatial and temporal processing of videos can amplify subtle variations that reveal important aspects of the world. Method 1900 considers a time series of color values at any spatial location (e.g., a pixel) and amplifies variation in a given temporal frequency band of interest. For example, method 1900 selects and then amplifies a band of temporal frequencies including plausible human heart rates. The amplification reveals the variation of redness as blood flows through the face,” 228 “employs localized spatial pooling and bandpass filtering to extract and reveal visually the signal corresponding to the pulse. The domain analysis allows amplification and visualization of the pulse signal at each location on the face. Asymmetry in facial blood flow can be a symptom of arterial problems,”]), suppress motion-induced noise components derived from facial movement components ([155 “temporal bandpass filtering that analyzes frequencies over time. In some implementations, the signal processing performed by signal-processor 1008 is spatial processing that removes noise. Apparatus 1000 amplifies only small temporal variations in the signal-processing module,” 188 “apparatus 1500 includes a signal-processing module 1510 that applies signal processing to the pixel value temporal variations 1508, generating an amplified temporal variation….signal processing performed by signal-processing module 1510 is spatial processing that removes noise,”]), and detect at least one of heart rate, blood pressure, and oxygen saturation ([227 “visualize flow of blood filling a face in the video and also amplify and reveal small motions, and other vital signs such as blood pressure, respiration, EKG and pulse,”]) as a vital sign based on the analyzed temporal variation of the pixel intensity data ([114 “apparatus 700 includes a regional facial clusterial module 708 that applies spatial clustering to the output of the frequency filter 706. The regional facial clusterial module 708 performs block 1606 in FIG. 16. In some implementations the regional facial clusterial module 708 includes fuzzy clustering, k-means clustering, expectation-maximization process, Ward's apparatus or seed point based clustering,” 115 “skin-pixel-identifier 702, the frequency filter 706, the regional facial clusterial module 708 and the frequency-filter 710 amplify temporal variations (as a temporal-variation-amplifier) in the two or more images,” 230 “spatial and temporal processing to emphasize subtle temporal changes in a video. Method 1900 decomposes the video sequence into different spatial frequency bands. These bands might be magnified differently because (a) the bands might exhibit different signal-to-noise ratios or (b) the bands might contain spatial frequencies for which the linear approximation used in motion magnification does not hold,” 231 “temporal processing on each spatial band. Method 1900 considers the time series corresponding to the value of a pixel in a frequency band and applies a bandpass filter to extract the frequency bands of interest. As one example, method 1900 may select frequencies within the range of 0.4-4 Hz, corresponding to 24-240 beats per minute, if the operator wants to magnify a pulse. If method 1900 extracts the pulse rate, then method 1900 can employ a narrow frequency band around that value,”]).
Therefore it would have been obvious to modify Badik to include a vital sign detection module configured to analyze temporal variation of pixel intensity data corresponding to blood flow from a predefined plurality of skin regions of a user's face, suppress motion-induced noise components derived from facial movement components, and detect at least one of heart rate, blood pressure, and oxygen saturation as a vital sign based on the analyzed temporal variation of the pixel intensity data, as per the steps of Khachaturian, in order to detect and analyze facial features scanned by the drug dispenser and assign a drug prescription accordingly, thereby providing helpful drugs that assist in treating individuals and improving their health.
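For illustration only, the bandpass technique quoted from Khachaturian at [231] (extracting a pulse from the 0.4-4 Hz band, i.e., 24-240 beats per minute, of a pixel-intensity time series) can be sketched as follows. The function name, sampling rate, and synthetic data are assumptions for this sketch, not part of the cited reference:

```python
import numpy as np

def estimate_pulse_bpm(pixel_series, fs, f_lo=0.4, f_hi=4.0):
    """Estimate heart rate from a skin-region pixel-intensity time series.

    pixel_series: 1-D array of mean pixel intensities over time.
    fs: sampling rate in frames per second.
    The 0.4-4 Hz band corresponds to 24-240 beats per minute.
    """
    x = pixel_series - np.mean(pixel_series)      # remove the DC component
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)      # plausible pulse band
    spectrum[~band] = 0.0                         # suppress out-of-band noise
    peak = freqs[band][np.argmax(np.abs(spectrum[band]))]
    return peak * 60.0                            # Hz -> beats per minute

# Synthetic 30 fps series: a 72 bpm (1.2 Hz) pulse plus slow drift and noise.
np.random.seed(7)
fs = 30.0
t = np.arange(0, 20, 1.0 / fs)
series = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * t \
    + 0.05 * np.random.randn(len(t))
print(round(estimate_pulse_bpm(series, fs)))  # prints 72
```

The slow drift term models gradual illumination change; it falls below the 0.4 Hz cutoff and is suppressed, which is the same rationale the reference gives for bandpass filtering the per-pixel time series.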
Claim 2: Badik in view of Cohan and Khachaturian disclose the drug dispenser as per Claim 1 above and Badik further discloses wherein the communication unit receives prescription information of the user from the outside and the drug supplier discharges drugs on the basis of the prescription information ([193 “receive a prescription; dispense a drug through the drug dispenser; update an inventory of the drug through the communication module; maintain a log of record of dispensing of the drug; maintain a ledger of the record of dispensing of the drug, using blockchain technology,” 194]).
Claim 4: Badik in view of Cohan and Khachaturian disclose the drug dispenser as per Claim 1 above and Badik further discloses wherein the medication management module recognizes drug information through the prescription information and checks whether drugs coincide with the prescription information when the drugs are put in and discharged on the basis of drug images taken by the vision sensor ([48 “authenticating the user by matching credentials of the biometric information against the record of practitioners and the record of patients; receiving the prescription for dispensing from the user through the system; checking and authenticating the dispensing of the drug with the inventory of the drug according to the user information; and dispensing the drug from a drug storage through the drug dispenser,” 239 “In case of a wrong drug container dispensed, the camera captures the image of the drug as well as the user, and the processor 310 is configured to raise an alarm to notify the practitioners and the authorized owners of the system for a misuse or illegal access of the drug container,” 241, 245, 250 “name of manufacturer of medication, image of the medication, dosage instructions, administration instructions, e.g., dosage, administration time, storage instructions, expiration date, remaining refills, interaction data, special instructions,”]).
Claim 5: Badik in view of Cohan and Khachaturian disclose the drug dispenser as per Claim 1 above and Badik does not explicitly disclose, however Cohan discloses wherein the controller further includes a stress estimation module that classifies facial expressions of a user through a learning model learning a classifier for facial expressions through a neural network and estimates a stress index by calculating collection information of facial expressions classified in a reference time period ([32 “facial expressions and/or mood; discomfort or pain from facial expressions; seizures; facial swelling; or similar. In some situations, the monitoring of one or many of such bodily functions can be accomplished by analyzing facial features and other qualitative visual cues,” 74, 75 “the biometric monitoring subsystem 130 can determine vital signs and/or bodily functions indicative of a stress level exceeding a threshold or an otherwise undesirable mood and physical condition (e.g., sadness and fatigue),” 91 “monitoring average pixel values on multiple regions of the face in a visible video feed and combining those in a neural network. In some cases, a model defining such a neural network can be retained in the analysis libraries,”]).
Therefore it would have been obvious to modify Badik such that the controller further includes a stress estimation module that classifies facial expressions of a user through a learning model learning a classifier for facial expressions through a neural network and estimates a stress index by calculating collection information of facial expressions classified in a reference time period, as per the steps of Cohan, in order to detect and analyze facial features scanned by the drug dispenser and assign a drug prescription accordingly, thereby providing helpful drugs that assist in treating individuals and improving their health.
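For illustration only, the stress-index aggregation recited in claim 5 (collecting classified facial expressions over a reference time period) might be sketched as below. The expression labels and weights are hypothetical placeholders; the cited Cohan reference does not specify them, and in practice the per-frame labels would come from the neural-network classifier:

```python
# Hypothetical weights: how strongly each classified expression
# contributes to a stress estimate (names and values are illustrative).
STRESS_WEIGHTS = {"neutral": 0.0, "happy": 0.0, "sad": 0.6,
                  "angry": 0.9, "fearful": 1.0, "pained": 0.8}

def stress_index(labels):
    """Aggregate per-frame expression labels collected over a reference
    time period into a 0-1 stress index (mean of per-label weights)."""
    if not labels:
        return 0.0
    return sum(STRESS_WEIGHTS.get(lab, 0.0) for lab in labels) / len(labels)

# A 10-frame reference window classified by the (assumed) expression model.
window = ["neutral"] * 6 + ["angry"] * 3 + ["happy"]
print(round(stress_index(window), 3))  # prints 0.27
```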
Claim 6: Badik in view of Cohan and Khachaturian disclose the drug dispenser as per Claim 1 above and Badik further discloses wherein the controller further includes a medication compliance prediction module that predicts and calculates medication compliance on the basis of any one item of information of a medication time, a number of times of medication, and variation of vital signs before and after medication ([198 “contact a treating health organization through the communication module 130 to relay medication compliance information and other related information for the patient's health records,” 238 “patient Z, the respiratory SpO2 value is 97 out of 100. After receiving the value of the vital of the patient Z from the biofeedback monitoring device 309, the processor compares the received value with the stored value that is 97. If for the value 97 the prescribed amount is 10 mg in a dosage, the processor adjusts the quantity of the drug to 7 mg if the received value is 93,” 250 “data regarding medication interactions, compliance information, etc. In one embodiment, one database comprises all of the data and information for the patient, the medications, compliance, prescribing physician/practitioner, etc. In one embodiment, the database can comprise at least a list of the drug the patient is receiving along with the dosage of that medication. Using the smart dispensing system, patient compliance can be monitored to assure the patient is adhering to the proper drug frequency, and this monitoring information can be stored in the database,”]).
Claim 7: Badik in view of Cohan and Khachaturian disclose the drug dispenser as per Claim 6 above and Badik further discloses wherein the controller transmits the predicted medication compliance information to a management server and the management server performs incentive processing for the user on the basis of the medication compliance information ([207 “application loaded on the healthcare provider's or the pharmacist's smartphone, via a text message or email, or through other messages. Another feature that can be incorporated in the drug dispenser 202 to encourage adherence are game-like software features that track and score patient's adherence performance over time and offer feedback and other benefits and awards for top performers,” 208, 209]).
Claim 8: Badik in view of Cohan and Khachaturian disclose the drug dispenser as per Claim 7 above and Badik further discloses wherein the incentive processing includes at least one of a medical expense discount, a premium discount, and a medication management fee discount ([207 “application loaded on the healthcare provider's or the pharmacist's smartphone, via a text message or email, or through other messages. Another feature that can be incorporated in the drug dispenser 202 to encourage adherence are game-like software features that track and score patient's adherence performance over time and offer feedback and other benefits and awards for top performers,” 208, 209]).
Claim 9: Badik in view of Cohan and Khachaturian disclose the drug dispenser as per Claim 1 above and Badik does not explicitly disclose, however Cohan discloses wherein the vital sign detection module measures vital signs of a user by inputting movement component data of the face and pixel variation component data of a preset plurality of measurement spot regions of the skin region, as input data, into a learning model ([9 “systems and methods also can include machine-learning (ML) techniques that can enable automated recognition of subtle or complex manifestations of illness/injury. As such, embodiments that include ML techniques can generate data that can facilitate implementations of ML diagnostic tools,” 52 “biometric monitoring subsystem 130 can determine an image value representative of an average pixel intensity within the region. In some cases, the pixels included in the determination of the image value are those pixels having an intensity (e.g., brightness value) that exceeds a threshold. The biometric monitoring subsystem 130 can map the image value to a temperature value of the skin temperature or the body temperature, or both,” 53 “biometric monitoring subsystem 130 can identify a first facial feature (e.g., forehead) using a visible video signal and also can identify a second facial feature (e.g., nose) using in the infrared video signal. In one example, the biometric monitoring subsystem 130 can apply a feature tracking technique on the visible video signal to identify the first facial feature—e.g., first facial feature can be forehead and the biometric monitoring subsystem 130 can identify an upper portion of a tracked face as the first facial feature,” 54, 55, 59]).
Therefore it would have been obvious to modify Badik such that the vital sign detection module measures vital signs of a user by inputting movement component data of the face and pixel variation component data of a preset plurality of measurement spot regions of the skin region, as input data, into a learning model, as per the steps of Cohan, in order to detect and analyze facial features scanned by the drug dispenser and assign a drug prescription accordingly, thereby providing helpful drugs that assist in treating individuals and improving their health.
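For illustration only, the image-value computation quoted from Cohan at [52] (an average pixel intensity over a facial region, restricted to pixels above a brightness threshold, then mapped to a temperature value) can be sketched as follows. The threshold of 50 and the linear calibration endpoints are assumptions of this sketch, not values from the cited reference:

```python
import numpy as np

def region_image_value(frame, region, threshold=50):
    """Average intensity of above-threshold pixels in a facial region.

    frame: 2-D grayscale array; region: (row0, row1, col0, col1) bounds.
    Only pixels brighter than `threshold` enter the average, per the
    cited approach of excluding dim (likely non-skin) pixels.
    """
    r0, r1, c0, c1 = region
    patch = frame[r0:r1, c0:c1].astype(float)
    bright = patch[patch > threshold]
    return float(bright.mean()) if bright.size else 0.0

def to_temperature(image_value, lo_val=60.0, hi_val=220.0,
                   lo_c=33.0, hi_c=39.0):
    """Linearly map an image value onto an assumed skin-temperature
    range; the calibration endpoints here are illustrative only."""
    frac = (image_value - lo_val) / (hi_val - lo_val)
    return lo_c + frac * (hi_c - lo_c)

frame = np.full((120, 160), 140, dtype=np.uint8)  # synthetic forehead frame
frame[:10, :] = 20                                # dim band below threshold
val = region_image_value(frame, (0, 120, 0, 160))
print(val)                            # prints 140.0 (dim pixels excluded)
print(round(to_temperature(val), 2))  # prints 36.0
```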
Response to Arguments
Applicant's arguments and amendments, see Remarks/Amendments submitted 26 December 2025, with respect to the rejection of claims 1, 2, and 4-9 have been carefully considered and are addressed below.
Claim Interpretation
Examiner previously issued a notice of claim interpretation as per the requirements of 35 U.S.C. 112(f) and pre-AIA 35 U.S.C. 112, sixth paragraph. Applicant did not address the issue, so the interpretation of all pending claims under the statute is maintained.
Claim Rejections - 35 USC § 101
Applicant amended the independent claim with respect to particular recitations of technically related processes directed to the detection of temporal variations of pixel intensity and the determination of blood flow measurements through the collection and processing of data related to predefined facial skin regions. The suppression of noise components with respect to detected facial movements is also recited. Examiner's conclusion is guided by the implemented claim amendments as well as the specific detailing of processes in the written description, at least at paragraphs [80]-[85], which detail the processing of collected data and the implementation of the claimed procedure. Therefore the rejection of all pending claims under 35 U.S.C. 101 is withdrawn.
Claim Rejections - 35 USC § 103
Applicant's arguments and amendments, see Remarks/Amendments filed 26 December 2025, with respect to the rejection of claims 1, 2, and 4-9 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the previous rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of the combination of previously cited references Badik in view of Cohan and further in view of newly identified reference Khachaturian. The disclosures of Khachaturian teach all elements of the newly amended limitation incorporated into the independent claim, as cited above, and accordingly the rejection of all pending claims under 35 U.S.C. 103 is set forth above as a new ground.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see the attached References Cited form PTO-892.
See Gold (20250017425) for disclosures related to methods of analyzing blood flow in a subject by the collection of skin related image data and the analysis of the images to determine blood flow characteristics. See at least paras. [16]-[43].
See Godghase et al. (20230362470) for disclosures related to the capturing of video movements of objects and the processing of the collected data to result in the interpretation of a wide variety of contours related to detected motions. See at least paras. [43]-[85].
See Cheng et al. (10,956,719) for disclosures related to the identification of facial information from detected and collected images and the determination of facial elements including landmarks, features, identification, and a range of other information. See at least Pages 1-3.
See Lai et al. (20190216333) for disclosures related to capturing facial digital images and determining blood circulation information as well as the determination of labels as related to blood circulation levels and the determination of a health index. See at least paras. [37]-[60].
See Zouridakis et al. (20080226151) for disclosures related to the implementation of devices for screening the skin of individuals in real time by the implementation of segmentation and band graph partitioning algorithms that classify a skin region as benign or malignant. See at least paras. [38]-[64].
See Kaneda et al. (20070122036) for disclosures related to the collection of images via an input unit and the collection of facially related data and the detection of facial expressions by the interpretation of detected facial feature points and the calculation of relative facial part locations. See at least paras. [81]-[127].
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David Stoltenberg whose telephone number is (571) 270-3472.
The examiner can normally be reached Monday-Friday, 8:30 AM to 5:00 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kambiz Abdi, can be reached on (571) 272-6702. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300, and the examiner's direct fax phone number is (571) 270-4472.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center at (866) 217-9197 (toll free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/DAVID J STOLTENBERG/Primary Examiner, Art Unit 3685