DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Final Office Action is in response to the Amendment and Remarks filed 10/16/2025, wherein:
Claims 1 and 11 are amended; and
Claims 1-4, 6-14, and 16-20 are currently pending and considered herein.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 6-14 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. 2021/0182539 A1 to Rassool, hereinafter “Rassool,” in view of U.S. 2015/0347734 A1 to Beigi, hereinafter “Beigi,” in view of U.S. 2014/0188770 A1 to Agrafioti et al., hereinafter “Agrafioti,” in view of U.S. 2023/0328417 A1 to Jumbe et al., hereinafter “Jumbe,” in view of U.S. 2011/0301436 A1 to Teixeira, hereinafter “Teixeira” and further in view of CN 110378394 A to Cai et al., hereinafter “Cai.”
Regarding claim 1, Rassool discloses generating a biometric identification signature of a human subject (See Rassool at least at Abstract; Paras. [0004], [0025]-[0026]), wherein generating the biometric identification signature further comprises: generating the biometric identification signature as a function of the machine-learning model (See id. at least at Paras. [0025]-[0026], [0041]-[0042], [0049]-[0051], [0061]; Fig. 1A). Rassool further discloses determining a first degree of similarity between the first physiological sample set and the biometric identification signature (See id. at least at Paras. [0026], [0029]-[0030], [0032]-[0034], [0067]; Figs. 2, 6); calculating an identity quantifier as a function of the first degree of similarity (See id. at least at Paras. [0026], [0032]-[0034], [0055]-[0056], [0071]-[0072]); and authenticating the first physiological sample set to the human subject as a function of the identity quantifier (See id. at least at Paras. [0032]-[0034], [0042], [0046]-[0047], [0051]).
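For illustration only (not part of the record), the general similarity-to-authentication pattern mapped to Rassool above can be sketched in Python; the feature vectors, the cosine-similarity metric, and the threshold value are hypothetical choices, not details taken from the reference:

```python
import math

def cosine_similarity(a, b):
    """Degree of similarity between a sample vector and a signature vector."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def authenticate(sample, signature, threshold=0.9):
    """Calculate an identity quantifier from the degree of similarity and
    authenticate the sample to the subject when the threshold is met."""
    identity_quantifier = cosine_similarity(sample, signature)
    return identity_quantifier, identity_quantifier >= threshold

# A sample close to the stored signature authenticates.
quantifier, ok = authenticate([0.9, 1.1, 1.0], [1.0, 1.0, 1.0])
```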
Rassool may not specifically describe but Beigi teaches a system for authentication of physiological data for use in telemedicine, the system comprising a computing device configured to: initiate a communication interface between the computing device and a client device, wherein the communication interface includes an audiovisual streaming protocol (See Beigi at least at Paras. [0028]-[0043] (device identifiers and communication, including protocol), [0063], [0067]-[0068] (face recognition video captured), [0102] (video stream and computer interface)); receive, using the audiovisual streaming protocol, a first physiological sample set (See id. at least at Paras. [0028]-[0043], [0060]-[0061], [0077]; Figs. 1, 4, 5, 22, 23 (images including pictures and video of patients’ faces are physiological sample sets and help to create biometric identification signatures); Para. [0102] (the video is streamed between computing devices); Claim 10); wherein the biometric identification signature comprises one or more of a hand geometry scan, vein scan, and an iris scan (See id. at least at Paras. [0005], [0020] (“Many different biometric methods may be used, such as those listed in Section 1.3. Some such techniques are Speaker Recognition, Image-Based or Audio-Based Ear Recognition, Face Recognition, Fingerprint Recognition, Palm Recognition, Hand-Geometry Recognition, Iris Recognition, Retinal Scan, Thermographic Image Recognition, Vein Recognition.”)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the disclosure of Rassool to incorporate the teachings of Beigi and provide an audiovisual streaming protocol for physiological sample sets. Beigi is directed to a multifactor authentication system using biometrics (See Beigi at Abstract). Incorporating the biometric authentication as in Beigi with the facial recognition using a motion vector trained model of Rassool would thereby increase the applicability, utility, and efficacy of the claimed methods and systems of biometric identification in telemedicine using remote sensing.
Rassool as modified by Beigi may not specifically describe but Agrafioti teaches wherein generating the biometric identification signature further comprises: using a feature learning algorithm to perform a clustering analysis that sorts the physiological data into a plurality of physiological data set clusters comprising sub-combinations of physiological data by determining a degree of similarity index value, wherein the computing device is further configured to iteratively identify, using the feature learning algorithm, physiological data set clusters to categorize the physiological data (See Agrafioti at least at Paras. [0063], [0092]-[0093], [0121], [0124], [0157], [0166]-[0167], [0196] (clustering and combinations); Claim 1).
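As technical background for the clustering limitation mapped to Agrafioti, a minimal k-means-style partition of physiological feature vectors might look as follows in Python; the data values, the two-cluster choice, the deterministic seeding, and the Euclidean similarity index are illustrative assumptions, not details from the reference:

```python
import math

def euclidean(a, b):
    """Degree-of-similarity index value: smaller distance = more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(data, k, iters=20):
    """Iteratively sort physiological data entries into k clusters by
    assigning each entry to its most similar (nearest) centroid."""
    # Simple deterministic seeding: first and last entries as initial centroids.
    centroids = [list(data[0]), list(data[-1])]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for point in data:
            idx = min(range(k), key=lambda i: euclidean(point, centroids[i]))
            clusters[idx].append(point)
        centroids = [
            [sum(col) / len(cluster) for col in zip(*cluster)] if cluster else centroids[i]
            for i, cluster in enumerate(clusters)
        ]
    return clusters, centroids

# Two well-separated groups of hypothetical (heart-rate, SpO2-like) readings.
data = [[60.0, 0.97], [62.0, 0.98], [61.0, 0.96],
        [120.0, 0.90], [118.0, 0.91], [121.0, 0.89]]
clusters, centroids = kmeans(data, k=2)
```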
Rassool as modified by Beigi and Agrafioti may not specifically describe but Jumbe teaches determining a highly divergent category of data based on a degree of similarity index value between the physiological data and a physiological data set cluster of the plurality of physiological data set clusters (See Jumbe at least at Paras. [0094]-[0096] (degree of transparency and divergence) (“The overall structural machine learning task is to complete a sufficiently large fragment of the overall biometric data jigsaw as to be able to yield a biosignature identifier with a very low collision probability.”), [0196]-[0200], [0250]-[0252] (identification markers extracted as similarity index values); Figs. 3, 4, 13); and receiving a subject signature training data comprising a plurality of physiological data entries corresponding to the highly divergent category of data (See id.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the disclosure of Rassool and Beigi to incorporate the teachings of Agrafioti and Jumbe and provide machine learning models specifically trained and using biometric identification clustering techniques. Agrafioti is directed to identity recognition using machine learning and physiological biometric signals (See Agrafioti at Abstract). Jumbe relates to secure identification methods and systems with biometric data. (See Jumbe at Abstract). Incorporating the machine learning biometric techniques as in Agrafioti with the secure identification and divergence as in Jumbe, the multifactor authentication using biometrics of Beigi and the biometric recognition and trained models as in Rassool would thereby increase the applicability, utility, and efficacy of the claimed methods and systems of biometric identification in telemedicine using remote sensing.
Rassool as modified by Beigi, Agrafioti and Jumbe may not specifically describe but Teixeira teaches receiving physiological data corresponding to the human subject, wherein the physiological data comprises cardiovascular data captured using an oximetry device (See Teixeira at least at Abstract; Para. [0036]; Claims 5, 6, 16; Figs. 1-5, 7-10).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the disclosure of Rassool, Beigi, Agrafioti and Jumbe to incorporate the teachings of Teixeira and provide physiological data captured using an oximetry device. Teixeira is directed to processing physiological sensor data including cardiovascular data using machine learning (See Teixeira at Abstract). Incorporating the cardiovascular model and techniques as in Teixeira with the machine learning and biometric techniques as in Agrafioti, the secure identification and divergence as in Jumbe, the multifactor authentication using biometrics of Beigi and the biometric recognition and trained models as in Rassool would thereby increase the applicability, utility, and efficacy of the claimed methods and systems of biometric identification in telemedicine using remote sensing.
The references may not specifically describe but Cai teaches iteratively training a machine-learning model as a function of a machine-learning process and the subject signature data, wherein iteratively training the machine-learning model comprises: training the machine-learning model using training data as an input layer of nodes, wherein the training data comprises at least a physiological data entry (See Cai at least at Abstract; Claim 1 (“A multi-physiological data fusion analysis method based on a neural network is characterized by comprising the following steps: adopting a neural network model to construct an initial model of fusion analysis of multiple physiological data; obtaining a plurality of groups of multi-physiological data vectors and corresponding high-risk, low-risk and normal three-state data, inputting the multi-physiological data vectors as training samples, outputting the high-risk, low-risk and normal three-state data as analysis output, inputting these into the initial model of multi-physiological data fusion analysis, and performing iterative computation and processing on a plurality of groups of preprocessed training samples with different numerical values by using a K-Means algorithm to obtain an initial central value c_i and an initial width b_i of a hidden layer neuron; then using a supervised learning algorithm to adjust the initial central value c_i of the hidden layer neuron […] modifying the number of hidden layer nodes of the neural network to w_i; training the initial model of the multi-physiological data fusion analysis to obtain a plurality of multi-physiological data fusion analysis correction models, wherein n is the number of iterations and i is the index of the i-th hidden layer node.”), Claims 2-5, 8); adjusting one or more connections and one or more weights between nodes in adjacent layers of the machine-learning model (See id. at least at Abstract; Claim 1 (“Respectively modifying the central value and the width of the neural network hidden layer neuron into c_i(n+1) and b_i(n+1), and processing the weight vector w_i(n) between the hidden layer and the output layer of the neural network in the initial model to obtain weight vector w_i(n+1); modifying the number of hidden layer nodes of the neural network to w_i; training the initial model of the multi-physiological data fusion analysis to obtain a plurality of multi-physiological data fusion analysis correction models, wherein n is the number of iterations and i is the index of the i-th hidden layer node.”), Claims 2-5 (“adjusting the initial center value c_i for hidden layer neurons using supervised learning algorithms to the central value c_i(n+1) of the hidden layer neuron, the specific method being as follows: S8, calculating the center value c_i(n+1) by the following formula, wherein n is the number of iterations; b is a bias term; n_1, the learning efficiency of the weight parameter between the center and the output layer, is 0.05.”), Claim 8); and retraining the machine-learning model as a function of the adjusted connections and weights to produce output layers of nodes (See id. at least at Abstract; Claims 1, 2, 4 (“using a supervised learning algorithm, processing the weight vector w_i(n) from the hidden layer to the output layer to obtain vector w_i(n+1), the specific method comprising the following steps: S6, defining an error cost function, wherein E is the error of a certain output point; N is the total number of training sample groups; e_j is the error signal for the j-th set of training samples, which is the error between the obtained result and the expected result; e_j is defined by a formula wherein k is the total number of hidden nodes, w_i is the i-th weight vector, x_j is the j-th set of training samples, d_j is the distance between the calculated j-th training sample group and the initial central value c_i, h_i is the expected distance of the training sample x_j from the hidden layer center value, G refers to the Green function, and X_j refers to the j-th training sample set; S7, calculating the weight vector w_i(n+1) between the hidden layer and the output layer by the following formula.”), Claims 5-8).
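To make the general pattern in the quoted (machine-translated) Cai procedure concrete — unsupervised initialization of hidden-layer centers followed by supervised, iterative adjustment of hidden-to-output weights — the following Python sketch illustrates a minimal radial-basis-function network of that kind; the toy data, fixed centers, learning rate, and single-output architecture are illustrative assumptions, not details from Cai:

```python
import math

def gaussian(x, center, width):
    """RBF hidden-node activation (a Green's-function-style kernel)."""
    dist_sq = sum((a - b) ** 2 for a, b in zip(x, center))
    return math.exp(-dist_sq / (2 * width ** 2))

# In Cai's procedure the centers c_i come from K-Means over the training
# samples; here they are fixed by hand for brevity.
centers = [[0.0, 0.0], [1.0, 1.0]]
width = 0.7
weights = [0.0, 0.0]  # w_i: hidden-to-output weights

samples = [([0.1, 0.0], 0.0), ([0.9, 1.0], 1.0),
           ([0.0, 0.1], 0.0), ([1.0, 0.9], 1.0)]
lr = 0.5  # learning efficiency (illustrative; Cai's quoted value is 0.05)

for _ in range(200):  # n: iteration count
    for x, target in samples:
        hidden = [gaussian(x, c, width) for c in centers]
        output = sum(w * h for w, h in zip(weights, hidden))
        error = target - output
        # Supervised adjustment of w_i(n) -> w_i(n+1)
        weights = [w + lr * error * h for w, h in zip(weights, hidden)]

predictions = [sum(w * gaussian(x, c, width) for w, c in zip(weights, centers))
               for x, _ in samples]
```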
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the disclosure of Rassool, Beigi, Agrafioti, Jumbe and Teixeira to incorporate the teachings of Cai and provide machine learning models, weights and iterative adjustments and using cardiovascular data. Cai is directed to a neural network-based multi-physiological data fusion analysis method (See Cai at Abstract). Incorporating the neural-network model, nodes and multi-physiological data fusion analysis of Cai with the cardiovascular model and techniques as in Teixeira, the machine learning and biometric techniques as in Agrafioti, the secure identification and divergence as in Jumbe, the multifactor authentication using biometrics of Beigi and the biometric recognition and trained models as in Rassool would thereby increase the applicability, utility, and efficacy of the claimed methods and systems of biometric identification in telemedicine using remote sensing.
Regarding claim 2, Rassool as modified by Beigi, Agrafioti, Jumbe, Teixeira and Cai teaches all the limitations of claim 1 and Rassool further discloses wherein the subject signature training data further comprises a plurality of category descriptors correlated to physiological entries (See Rassool at least at Paras. [0040], [0044], [0048]).
Regarding claim 3, Rassool as modified by Beigi, Agrafioti, Jumbe, Teixeira and Cai teaches all the limitations of claim 1 and Rassool further discloses wherein the subject signature training data classifies physiological entries corresponding to the human subject (See id. at least at Paras. [0032]-[0034], [0040], [0044], [0048]; Fig. 6).
Regarding claim 4, Rassool as modified by Beigi, Agrafioti, Jumbe, Teixeira and Cai teaches all the limitations of claim 1 and Rassool further discloses wherein the computing device is further configured to: generate a second biometric identification signature of the human subject, as a function of the machine-learning model (See id. at least at Paras. [0040], [0044], [0048]; Fig. 6); determine a second degree of similarity between a second physiological sample set and the second biometric signature (See id. at least at Paras. [0032]-[0034], [0051], [0055]-[0056], [0067], [0071]-[0072]; Figs. 5, 6); and calculate the identity quantifier as a function of the first degree of similarity and the second degree of similarity (See id.).
Regarding claim 6, Rassool as modified by Beigi, Agrafioti, Jumbe, Teixeira and Cai teaches all the limitations of claim 1 and Rassool further discloses wherein the computing device is further configured to receive the first physiological sample set from a remote sensor (See Rassool at least at Paras. [0005], [0028], [0030]-[0033], [0094]; Figs. 1, 2).
Regarding claim 7, Rassool as modified by Beigi, Agrafioti, Jumbe, Teixeira and Cai teaches all the limitations of claim 1 and Rassool further discloses wherein the first physiological sample set further comprises image data (See id. at least at Paras. [0005], [0028], [0030]-[0033], [0094]; Figs. 1, 2).
Regarding claim 8, Rassool as modified by Beigi, Agrafioti, Jumbe, Teixeira and Cai teaches all the limitations of claim 1 and Beigi further teaches wherein the first physiological sample set further comprises audio data (See Beigi at least at Paras. [0020], [0060]-[0061]; Figs. 22, 23).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the disclosure of Rassool, Agrafioti, Jumbe, Teixeira and Cai to incorporate the teachings of Beigi and provide audio data for the physiological sample set. Beigi is directed to a multifactor authentication system using biometrics (See Beigi at Abstract). Incorporating the biometric authentication as in Beigi with the neural-network model, nodes and multi-physiological data fusion analysis of Cai, the cardiovascular model and techniques as in Teixeira, the machine learning and biometric techniques as in Agrafioti, the secure identification and divergence as in Jumbe and the biometric recognition and trained models as in Rassool would thereby increase the applicability, utility, and efficacy of the claimed methods and systems of biometric identification in telemedicine using remote sensing.
Regarding claim 9, Rassool as modified by Beigi, Agrafioti, Jumbe, Teixeira and Cai teaches all the limitations of claim 1 and Rassool further discloses wherein the computing device is further configured to: divide a physiological sample set from the human subject into a plurality of physiological sample subsets (See Rassool at least at Abstract; Paras. [0025], [0058], [0060]-[0062], [0073]-[0074]; Fig. 7); generate feature learning training data comprising the plurality of physiological sample subsets (See id. at least at Abstract; Paras. [0030]-[0032], [0040], [0060]-[0062], [0073]-[0074]; Fig. 7); train a feature learning model, as a function of the feature learning training data and a feature learning algorithm (See id. at least at Paras. [0062]-[0068], [0071]-[0072]; Figs. 5, 6); and correlate physiological subsets from the plurality of physiological subsets to one another, as a function of the feature learning model (See id. at least at Paras. [0037], [0086], [0091]).
Regarding claim 10, Rassool as modified by Beigi, Agrafioti, Jumbe, Teixeira and Cai teaches all the limitations of claim 1 and Rassool further discloses wherein determining the first degree of similarity further comprises: generating a distance metric between the first physiological sample set and the at least a biometric identification signature; and determining the first degree of similarity as a function of the distance metric (See id. at least at Paras. [0026], [0044]-[0045], [0071]-[0072]; Figs. 5, 6).
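For context on the distance-metric limitation of claim 10, a common pattern — generating a distance between the sample and the signature and converting it to a degree of similarity — can be sketched as follows; the Euclidean metric and the 1/(1+d) conversion are illustrative choices, not taken from Rassool:

```python
import math

def distance_metric(sample, signature):
    """Euclidean distance between a physiological sample set and a
    biometric identification signature, treated as feature vectors."""
    return math.sqrt(sum((s - g) ** 2 for s, g in zip(sample, signature)))

def degree_of_similarity(sample, signature):
    """Map distance into (0, 1]: identical vectors score exactly 1.0."""
    return 1.0 / (1.0 + distance_metric(sample, signature))

same = degree_of_similarity([1.0, 2.0], [1.0, 2.0])  # distance 0 -> similarity 1.0
far = degree_of_similarity([1.0, 2.0], [4.0, 6.0])   # distance 5 -> similarity 1/6
```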
Regarding claims 11-14 and 16-20, claims 11-14 and 16-20 recite substantially the same limitations as included in claims 1-4 and 6-10, respectively. Thus, claims 11-14 and 16-20 are rejected under the same grounds of rejection and for the same reasoning as applied to claims 1-4 and 6-10, above.
Response to Arguments
Applicant’s Amendment and Remarks filed October 16, 2025 have been fully considered, but they are not entirely persuasive. The following explains why:
Applicant’s arguments pertaining to subject matter eligibility, at pages 7-15 of Applicant’s Response, are persuasive. The rejection under 35 U.S.C. § 101 has been reconsidered under the 2019 and 2024 Patent Subject Matter Eligibility Guidance (PEG) and the recent instruction from Director Squires dated 12/04/2025, and is withdrawn.
Applicant’s arguments pertaining to the prior art rejections are not persuasive. The claims have been addressed with regard to the 35 U.S.C. § 103 rejection discussed above. The arguments pertaining to the prior art references at pages 16-19 of Applicant’s Remarks are moot in light of at least the Beigi reference, discussed above. (See Beigi at least at Paras. [0005], [0020] (“Many different biometric methods may be used, such as those listed in Section 1.3. Some such techniques are Speaker Recognition, Image-Based or Audio-Based Ear Recognition, Face Recognition, Fingerprint Recognition, Palm Recognition, Hand-Geometry Recognition, Iris Recognition, Retinal Scan, Thermographic Image Recognition, Vein Recognition.”)). As such, it is submitted that the cited prior art, including the references identified by Applicant, in the same field of endeavor, i.e., techniques for biometric identification using machine learning models and mathematical training, teaches and/or suggests all of the limitations of the pending claims under the broadest reasonable interpretation thereof.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM T. MONTICELLO whose telephone number is (313) 446-4871. The examiner can normally be reached M-Th, 08:30-18:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MARC Q. JIMENEZ can be reached at (571) 272-4530. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILLIAM T. MONTICELLO/Examiner, Art Unit 3681
/MARC Q JIMENEZ/Supervisory Patent Examiner, Art Unit 3681