Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on August 10, 2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 11 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Hernandez-Ortega, J. et al. “Continuous Presentation Attack Detection in Face Biometrics Based on Heart Rate.” FFER/DLPR@ICPR (2018), in view of Atoum, Yousef et al. “Face anti-spoofing using patch and depth-based CNNs.” 2017 IEEE International Joint Conference on Biometrics (IJCB) (2017): 319-328.
Regarding claim 1, Hernandez-Ortega et al. discloses a device for analysing video data (Hernandez-Ortega et al. page 76, section 3: “The main purpose of the continuous PAD module proposed in Fig. 1 consists in deciding if a video sequence contains images of real faces or images of presentation attacks.”), comprising: a first analyser arranged to execute a remote photoplethysmography measurement on video data to be analysed received as an input (Hernandez-Ortega et al. Figure 2), comprising a separator arranged to determine areas of interest in the video data to be analysed (Hernandez-Ortega et al. page 77, third paragraph: “Skin Detection… in this work we decided to apply the skin detector presented in [13] for getting our ROI.”), an aggregator arranged to determine a remote photoplethysmography signal from the video data to be analysed relative to each area of interest (Hernandez-Ortega et al. page 77, fourth paragraph: “rPPG Signal Extraction. Once the skin pixels have been located (see Fig. 3(c) and (d) for examples), the next stage consists in extracting the rPPG signal from each considered segment (of T seconds).”), and a computer arranged to calculate a spectral signal from the photoplethysmography signal, and to obtain therefrom one or more physiological signals (Hernandez-Ortega et al. pages 77-78: “This method also performs a frequency analysis of the signal for magnifying the bands related to an expected human pulse (between 0.6Hz and 5Hz). 3.2 Feature Extraction. In our previous work [11], used for reference, we decided to use the features from [12], where the authors transformed the signal from the spatial domain to the frequency domain using the FFT, and after that they estimated its Power Spectral Density (PSD) distribution. Two features were extracted from each color band: the maximum power response P, and the ratio R between P and the total power in the 0.6-4Hz frequency range. For this work we decided to complement these two features P and R with other discriminant features that can give us more information about the rPPG signal in the time domain, following [4]. That work processed data from 3D accelerometer sensors, but their analysis is extrapolable to our rPPG signals. The final selected features can be seen in Table 1.”), a tester arranged to receive said one or more physiological signals and to return a first human presence value (page 78, section 3.3: “Classification”), a second analyser arranged to receive the video data to be analysed and to apply to it a neural network to obtain therefrom a second human presence value (Hernandez-Ortega et al. page 78, section 3.4: “In our experimental study we compare 4 different methods for the final stage in Figure 2.”), and to return a unified human presence value (Hernandez-Ortega et al. Figure 2 and page 79, paragraphs 2 to 4: “Mean Score”, “Confidence-Based Combination”, “Quickest Change Detection”).
However, Hernandez-Ortega et al. fails to disclose the neural network being trained on video data similar to the video data to be analysed and sets of characteristics extracted from this video data, obtained by local analysis and/or by machine learning, and a unifier arranged to receive the first human presence value and the second human presence value.
Atoum et al. teaches the neural network being trained on video data similar to the video data to be analysed and sets of characteristics extracted from this video data, obtained by local analysis and/or by machine learning, and a unifier arranged to receive the first human presence value and the second human presence value (Atoum et al. Figure 2 and pages 321-323, section 3). The feature of applying a neural network trained on video data provides the same advantage as the corresponding feature of the claimed invention, namely generating a human presence score. Thus, it would have been obvious to a person skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Atoum et al. with the teachings of Hernandez-Ortega et al. to implement the second analyser of the claimed device.
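By way of illustration only, the P and R spectral features quoted above from Hernandez-Ortega et al. could be computed roughly as follows. This is a minimal sketch, not code from the reference; the function name, the periodogram PSD estimate and the parameters are assumptions.

```python
import numpy as np

def spectral_features(rppg, fs, band=(0.6, 4.0)):
    """Sketch of the P and R features described in Hernandez-Ortega et al.,
    section 3.2: P is the maximum power response, R the ratio of P to the
    total power in the 0.6-4 Hz band. A simple periodogram stands in for
    the paper's PSD estimate."""
    psd = np.abs(np.fft.rfft(rppg)) ** 2
    freqs = np.fft.rfftfreq(len(rppg), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    p = psd[in_band].max()            # maximum power response P
    r = p / psd[in_band].sum()        # ratio R of P to total in-band power
    return p, r
```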
Regarding claim 11, Hernandez-Ortega et al. further discloses a device according to claim 1, wherein the unifier is arranged to carry out an operation out of a product of the input values with weighted weights, the application of logistic regression models, a combination of the min/max/average type, or a random forest algorithm (Hernandez-Ortega et al. page 79, Confidence-Based Combination, where a weighted sum of input scores is applied).
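For illustration, a minimal sketch of some of the fusion operations recited in claim 11; the function name and the weighting scheme are assumptions, not taken from the cited references.

```python
def fuse_scores(s1, s2, mode="weighted", w=(0.5, 0.5)):
    """Combine two human-presence scores, illustrating fusion options of the
    kind recited in claim 11: a weighted combination (cf. the confidence-based
    combination of Hernandez-Ortega et al.) or a min/max/average rule."""
    if mode == "weighted":
        return w[0] * s1 + w[1] * s2
    if mode == "min":
        return min(s1, s2)
    if mode == "max":
        return max(s1, s2)
    return (s1 + s2) / 2.0            # average
```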
Regarding claim 13, Hernandez-Ortega et al. further discloses a computer program comprising instructions to implement the device according to claim 1.
Regarding claim 14, Hernandez-Ortega et al. further discloses a storage medium on which the computer program according to claim 13 is recorded.
Regarding claim 15, Hernandez-Ortega et al. further discloses a method implemented by computer comprising receiving video data, processing them with the device according to claim 1, and returning a unified human presence value.
Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Hernandez-Ortega, J. et al. “Continuous Presentation Attack Detection in Face Biometrics Based on Heart Rate.” FFER/DLPR@ICPR (2018), in view of Atoum, Yousef et al. “Face anti-spoofing using patch and depth-based CNNs.” 2017 IEEE International Joint Conference on Biometrics (IJCB) (2017): 319-328, as applied to claim 1 above, and further in view of Liu, Yaojie et al. “Learning Deep Models for Face Anti-Spoofing: Binary or Auxiliary Supervision.” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018): 389-398.
Regarding claim 2, Hernandez-Ortega et al. in the combination discloses a device according to claim 1. However, the combined teachings of Hernandez-Ortega et al. and Atoum et al. as a whole fail to disclose wherein the separator is arranged to apply one or more out of the group comprising the Haar cascades method and a deep neural network in order to determine the contours of the face in each frame of the video data, and to divide them into areas of interest in each frame.
Liu et al. teaches wherein the separator is arranged to apply one or more out of the group comprising the Haar cascades method and a deep neural network in order to determine the contours of the face in each frame of the video data, and to divide them into areas of interest in each frame (Liu et al. pages 390-394, sections 2-4: a loss-function metric to detect fake versus real images, and minimization of the loss function using depth maps of 3D representations of the face). The Haar cascades method is well known in the art and, together with a deep neural network, allows appropriate training on the captured datasets. It would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have included this teaching with the teachings of Hernandez-Ortega et al. to arrive at the device solving the described problem.
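As background illustration of the well-known Haar cascades method referenced above, a minimal sketch using OpenCV's stock frontal-face cascade; the helper name is illustrative, and this is not code from any cited reference.

```python
import cv2

def face_rois(frame):
    """Detect face regions with the Haar cascades method and return the
    cropped bounding boxes as areas of interest for the separator."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in faces]
```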
Regarding claim 3, Atoum et al. in the combination further teaches wherein the deep neural network is retinafacemnet025_v2 or res10_300x300_ssditer_140000 (Atoum et al., section 3, page 321: the proposed method utilizes a patch-based CNN and a depth-based CNN, trained end-to-end, to learn rich appearance features capable of discriminating between live and spoof face images using patches randomly extracted from face images).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Hernandez-Ortega, J. et al. “Continuous Presentation Attack Detection in Face Biometrics Based on Heart Rate.” FFER/DLPR@ICPR (2018); Atoum, Yousef et al. “Face anti-spoofing using patch and depth-based CNNs.” 2017 IEEE International Joint Conference on Biometrics (IJCB) (2017): 319-328; and Liu, Yaojie et al. “Learning Deep Models for Face Anti-Spoofing: Binary or Auxiliary Supervision.” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018): 389-398, as applied to claim 2 above, and further in view of Rodriguez et al. (US Patent Application Publication No. US 2018/0239955).
Regarding claim 4, the combination of Hernandez-Ortega et al., Atoum et al. and Liu et al. discloses a device according to claim 2. However, Hernandez-Ortega et al., Atoum et al. and Liu et al. fail to disclose wherein the separator is arranged to cut the video data in which the contours of the face have been determined by colorimetric analysis and/or on the basis of the recognition of a characteristic point of the face.
Rodriguez et al. teaches wherein the separator is arranged to cut the video data in which the contours of the face have been determined by colorimetric analysis and/or on the basis of the recognition of a characteristic point of the face (Rodriguez et al. Figures 20A, 20B, [0143]-[0160]: a heartbeat detection algorithm that determines a time series of color values for at least one location on the user’s skin, and determines whether the time series exhibits variations indicative of a heartbeat). This provides a more specific target for the video data processing, which improves the efficiency of the deep neural network in further processing. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Hernandez-Ortega et al., Atoum et al. and Liu et al. with the teachings of Rodriguez et al. to specify the timeframe needed for the video data input.
Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Hernandez-Ortega, J. et al. “Continuous Presentation Attack Detection in Face Biometrics Based on Heart Rate.” FFER/DLPR@ICPR (2018), in view of Atoum, Yousef et al. “Face anti-spoofing using patch and depth-based CNNs.” 2017 IEEE International Joint Conference on Biometrics (IJCB) (2017): 319-328, as applied to claim 1 above, and further in view of Rodriguez et al. (US Patent Application Publication No. US 2018/0239955).
Regarding claim 5, Hernandez-Ortega et al. in the combination discloses a device according to claim 1. However, the combined teachings of Hernandez-Ortega et al. and Atoum et al. as a whole fail to disclose wherein the aggregator is arranged to determine a remote photoplethysmography signal, for each frame, from the average of the respective R, G, B components of the video data of each area of interest.
Rodriguez et al. teaches wherein the aggregator is arranged to determine a remote photoplethysmography signal, for each frame, from the average of the respective R, G, B components of the video data of each area of interest (Rodriguez et al. Figures 20A, 20B, [0143]-[0160]: a heartbeat detection algorithm that determines a time series of color values for at least one location on the user’s skin, and determines whether the time series exhibits variations indicative of a heartbeat). The green channel has been shown to carry the strongest photoplethysmography signal; however, all channels can be used, as shown by Rodriguez et al., to extract the stronger signal. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have included the techniques of Rodriguez et al. with the device of Hernandez-Ortega et al. to analyze a more robust rPPG signal.
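For illustration, the per-frame channel averaging recited in claim 5 could look as follows. This is a minimal sketch under assumed inputs (one cropped region-of-interest array per frame), not code from any cited reference.

```python
import numpy as np

def rgb_means(roi_frames):
    """For each frame's area of interest, average the color components; the
    three resulting time series form the raw rPPG traces of claim 5.
    Assumes each element is an H x W x 3 array (channel order as captured)."""
    return np.array([roi.reshape(-1, 3).mean(axis=0) for roi in roi_frames])
```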
Regarding claim 6, Rodriguez et al. in the combination further discloses a device according to claim 5, wherein the aggregator is further arranged to determine a remote photoplethysmography signal from a normalization and from an infinite or finite impulse response band-pass filtering applied to the average of the respective R, G, B components of the video data of each area of interest (Rodriguez et al. [0152]-[0158], where a filter is applied; [0461]).
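A minimal sketch of the normalization and band-pass filtering recited in claim 6, assuming the channel-mean traces from the sketch above; the Butterworth design, order and cut-offs are assumptions, not taken from Rodriguez et al.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_rppg(channel_means, fs, low=0.6, high=4.0, order=3):
    """Normalize each color trace, then apply an IIR (Butterworth) band-pass
    filter over a plausible human pulse band, per claim 6."""
    x = (channel_means - channel_means.mean(axis=0)) / channel_means.std(axis=0)
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=0)   # zero-phase filtering of each channel
```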
Regarding claim 7, Rodriguez et al. in the combination further discloses a device according to claim 5, wherein the aggregator is further arranged to determine a remote photoplethysmography signal from the combination of the signals obtained from the respective R, G, B components of the video data of each area of interest (Rodriguez et al. [0460]).
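One well-known way to combine the three channel signals into a single rPPG trace is a chrominance-style projection. The sketch below uses the CHROM coefficients of de Haan and Jeanne as an assumed example; it is not the method of Rodriguez et al. or any other cited reference.

```python
import numpy as np

def chrom_combine(filtered):
    """Combine filtered R, G, B traces (columns of `filtered`) into one rPPG
    signal via a chrominance projection (CHROM-style coefficients)."""
    r, g, b = filtered[:, 0], filtered[:, 1], filtered[:, 2]
    x = 3.0 * r - 2.0 * g
    y = 1.5 * r + g - 1.5 * b
    alpha = np.std(x) / np.std(y)      # scale so motion components cancel
    return x - alpha * y
```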
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Hernandez-Ortega, J. et al. “Continuous Presentation Attack Detection in Face Biometrics Based on Heart Rate.” FFER/DLPR@ICPR (2018) in view of Atoum, Yousef et al. “Face anti-spoofing using patch and depth-based CNNs.” 2017 IEEE International Joint Conference on Biometrics (IJCB) (2017): 319-328, as applied to claim 1 above, and further in view of Li, Xiaobai et al. “Remote Heart Rate Measurement from Face Videos under Realistic Situations.” 2014 IEEE Conference on Computer Vision and Pattern Recognition (2014): 4264-4271.
Regarding claim 8, Hernandez-Ortega et al. in the combination discloses a device according to claim 1 wherein the computer is arranged to receive the remote photoplethysmography signal and to obtain therefrom one or more physiological signals (Hernandez-Ortega et al. section 3.2). However, the combination of Hernandez-Ortega et al. and Atoum et al. as a whole does not disclose doing this by applying a Welch algorithm or a fast Fourier transform and by obtaining one or more spectra, and by determining one or more physiological data chosen from a group comprising the cardiac rhythm, the respiratory rhythm, or the variation in cardiac frequency.
Li et al. teaches applying a Welch algorithm or a fast Fourier transform and obtaining one or more spectra, and determining one or more physiological data chosen from a group comprising the cardiac rhythm, the respiratory rhythm, or the variation in cardiac frequency (Li et al. section 3, where the use of the Welch algorithm is described). This aspect is important to the signal processing because it reduces noise in the recorded physiological data. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have included this element of Li et al. with the teachings of Hernandez-Ortega et al. to obtain a cleaner physiological signal from data collected from human subjects.
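For illustration, estimating the cardiac rhythm with the Welch algorithm could look as follows. This is a minimal sketch with assumed parameters, not code from Li et al.

```python
import numpy as np
from scipy.signal import welch

def heart_rate_bpm(rppg, fs):
    """Estimate the cardiac rhythm from an rPPG trace via a Welch power
    spectrum: take the spectral peak inside a plausible pulse band."""
    freqs, psd = welch(rppg, fs=fs, nperseg=min(len(rppg), 256))
    band = (freqs >= 0.6) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(psd[band])]   # beats per minute
```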
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Hernandez-Ortega, J. et al. “Continuous Presentation Attack Detection in Face Biometrics Based on Heart Rate.” FFER/DLPR@ICPR (2018) in view of Atoum, Yousef et al. “Face anti-spoofing using patch and depth-based CNNs.” 2017 IEEE International Joint Conference on Biometrics (IJCB) (2017): 319-328, as applied to claim 1 above, and further in view of Shen et al. (Chinese Patent Publication CN 112329696 A).
Regarding claim 9, Hernandez-Ortega et al. in the combination discloses a device according to claim 1. However, Hernandez-Ortega et al. and Atoum et al. fail to disclose wherein the tester is a neural network which has been trained with a database of videos labelled to indicate a human presence or not, the data provided to the input layer of this neural network being formed by the physiological data signal determined for each of these videos.
Shen et al. teaches wherein the tester is a neural network which has been trained with a database of videos labelled to indicate a human presence or not, the data provided to the input layer of this neural network being formed by the physiological data signal determined for each of these videos (Shen et al. Figure 5: a human face living-body detection system with a sample collecting module M100 for collecting training sample images, each training sample image carrying a mark indicating whether it is a living body). It is important to the claimed invention to have a discriminator that distinguishes whether the video image data shows a living-body presence or a fake presence indicating a spoofing attempt. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have included the teachings of Shen et al. with the teachings of Hernandez-Ortega et al. and Atoum et al. to obtain a more effective anti-spoofing device.
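By way of illustration, such a tester trained on labelled physiological-signal features could be sketched as follows; the architecture and hyper-parameters are assumptions, not taken from Shen et al.

```python
from sklearn.neural_network import MLPClassifier

def train_tester(physio_features, labels):
    """Train a tester network on physiological-signal feature vectors, one per
    video, with labels 1 (human presence) or 0 (presentation attack)."""
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
    net.fit(physio_features, labels)
    return net   # net.predict_proba then yields a human-presence score
```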
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Hernandez-Ortega, J. et al. “Continuous Presentation Attack Detection in Face Biometrics Based on Heart Rate.” FFER/DLPR@ICPR (2018) in view of Atoum, Yousef et al. “Face anti-spoofing using patch and depth-based CNNs.” 2017 IEEE International Joint Conference on Biometrics (IJCB) (2017): 319-328.2 as applied to claim 1 above, and further in view of Xu Zhenqi et. al.: “Learning temporal features using LSTM-CNN architecture for face anti-spoofing”, 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), IEEE, 3 November 2015 (2015-11-03), pages 141-145, XP032910078, doi: 10.1109/ACPR.2015.7486482.
Regarding claim 10, Hernandez-Ortega et al. in the combination discloses a device according to claim 1. However, the combination of Hernandez-Ortega et al. and Atoum et al. as a whole fails to disclose wherein the second analyser comprises on the one hand a neural network of the LSTM type which receives as an input facial characteristics extracted from the video data by applying an extraction of the LBP type and/or an extraction of the SURF type, and which is trained with a database of videos labelled to indicate a human presence or not, and on the other hand a deep neural network based on the MobilenetV3 or ResNext architecture comprising at the output a dense layer of neurons normalized by a layer applying the Softmax function, the main cost function being able to mix cross-entropy loss, focal loss, label softening and maximum entropy loss, and optionally one or more auxiliary cost functions based on a depth map, the rPPG signal, attributes relative to the video quality, attributes relative to the color of the skin, and attributes relative to the type of apparatus.
Zhenqi et al. teaches wherein the second analyser comprises on the one hand a neural network of the LSTM type which receives as an input facial characteristics extracted from the video data by applying an extraction of the LBP type and/or an extraction of the SURF type (Zhenqi et al., pages 141-143, Figure 4, sections 1-3, where a Softmax function is applied to the LSTM-CNN architecture with speeded-up robust features (SURF) and local binary patterns (LBP)), and which is trained with a database of videos labelled to indicate a human presence or not, and on the other hand a deep neural network based on the MobilenetV3 or ResNext architecture comprising at the output a dense layer of neurons normalized by a layer applying the Softmax function (Zhenqi et al., pages 141-143, sections 1-3), the main cost function being able to mix cross-entropy loss, focal loss, label softening and maximum entropy loss, and optionally one or more auxiliary cost functions based on a depth map, the rPPG signal, attributes relative to the video quality, attributes relative to the color of the skin, and attributes relative to the type of apparatus.
Zhenqi et al. uses a technique that is one of several options a person skilled in the art could use to solve the problem stated in the claimed invention. This method is simple and efficient for face anti-spoofing problems. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to have combined the teachings of Hernandez-Ortega et al. and Zhenqi et al. to obtain a more robust CNN architecture for analyzing the video data.
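For illustration, an LSTM-type analyser over per-frame LBP feature vectors, loosely following the LSTM-CNN idea of Zhenqi et al., might be sketched as below; the layer sizes and the 59-bin uniform-LBP feature dimension are assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class LBPSequenceNet(nn.Module):
    """LSTM over a sequence of per-frame LBP histograms, ending in a dense
    layer normalized by the Softmax function, as in the claim 10 language."""
    def __init__(self, feat_dim=59, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):                    # x: (batch, frames, feat_dim)
        _, (h, _) = self.lstm(x)
        return torch.softmax(self.head(h[-1]), dim=-1)  # live/spoof scores
```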
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Hernandez-Ortega, J. et al. “Continuous Presentation Attack Detection in Face Biometrics Based on Heart Rate.” FFER/DLPR@ICPR (2018) in view of Atoum, Yousef et al. “Face anti-spoofing using patch and depth-based CNNs.” 2017 IEEE International Joint Conference on Biometrics (IJCB) (2017): 319-328; X. Niu, H. Han, S. Shan and X. Chen, “SynRhythm: Learning a Deep Heart Rate Estimator from General to Specific,” in 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 2018, pp. 3580-3585, doi: 10.1109/ICPR.2018.8546321; Rinku Datta Rakshit et al.: “Face Spoofing and Counter-Spoofing: A Survey of State-of-the-art”, Transactions on Machine Learning and Artificial Intelligence, Vol. 5, No. 2, 9 May 2017 (2017-05-09), pages 31-73, XP055559503, doi: 10.14738/TMLAI.52.3130; and Xu Zhenqi et al.: “Learning temporal features using LSTM-CNN architecture for face anti-spoofing”, 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), IEEE, 3 November 2015 (2015-11-03), pages 141-145, XP032910078, doi: 10.1109/ACPR.2015.7486482.
Regarding claim 12, Hernandez-Ortega et al. discloses a device for analysing video data (Hernandez-Ortega et al. page 76, section 3: “The main purpose of the continuous PAD module proposed in Fig. 1 consists in deciding if a video sequence contains images of real faces or images of presentation attacks.”), comprising an aggregator arranged to determine a remote photoplethysmography signal from the video data to be analysed relative to each area of interest and coupled with a neural network arranged to extract remote photoplethysmography characteristics (Hernandez-Ortega et al. page 77, fourth paragraph: “rPPG Signal Extraction. Once the skin pixels have been located (see Fig. 3(c) and (d) for examples).”).
However, Hernandez-Ortega et al. fails to disclose an analyser arranged to receive the video data and to apply to it a neural network to obtain therefrom deep characteristics, the neural network being trained on video data similar to the video data to be analysed and sets of characteristics extracted from this video data, obtained by local analysis and/or by machine learning, a separator arranged to determine areas of interest in the video data to be analysed, extract characteristics of areas of interest coupled with a neural network arranged to extract facial characteristics, a neural network applying a Softmax function to the deep characteristics, to the characteristics of areas of interest, to the facial characteristics and to the remote photoplethysmography characteristics to obtain therefrom a characteristic map score, a computer arranged to calculate a remote photoplethysmography score from the data coming from the aggregator or from the separator, an analyser arranged to calculate a luminosity score from an image processing that analyses the luminosity of the video data by seeking a colorimetric deviation in order to characterize the probability that the video data was refilmed, and a unifier arranged to receive the characteristic map score, the remote photoplethysmography score and the luminosity score, and to return a unified human presence value.
Atoum et al. teaches a unifier arranged to receive the characteristic map score, the remote photoplethysmography score and the luminosity score, and to return a unified human presence value (Atoum et al. Figure 2 and pages 321-323, section 3).
Niu et al. teaches an analyser arranged to receive the video data and to apply to it a neural network to obtain therefrom deep characteristics (Niu et al., pp. 3581-3583, sections II-III: an approach using a deep heart rate estimator with a spatial-temporal map representing the heart rate signals). Niu et al. further teaches obtaining therefrom a characteristic map score (Niu et al., pp. 3581-3583, sections II-III), and a computer arranged to calculate a remote photoplethysmography score from the data coming from the aggregator or from the separator.
Rakshit et al. teaches the neural network being trained on video data similar to the video data to be analysed and sets of characteristics extracted from this video data (Rakshit et al. sections 4.4 and 5.4: an image-quality-analysis-based technique extracted from videos to determine different qualities of fake faces), obtained by local analysis and/or by machine learning, a separator arranged to determine areas of interest in the video data to be analysed, extract characteristics of areas of interest coupled with a neural network arranged to extract facial characteristics (Rakshit et al. sections 4.4 and 5.4: the different qualities of fake faces include specular reflection features, blurriness features, color diversity features, chromatic moment features, and edge information), and an analyser arranged to calculate a luminosity score from an image processing that analyses the luminosity of the video data by seeking a colorimetric deviation in order to characterize the probability that the video data was refilmed (Rakshit et al. sections 4.4 and 5.4: an image-quality-analysis-based technique that captures four different qualities of fake faces).
Zhenqi et al. teaches a neural network applying a Softmax function to the deep characteristics (Zhenqi et al. applies the Softmax function in a long short-term memory (LSTM)-CNN architecture), to the characteristics of areas of interest, to the facial characteristics and to the remote photoplethysmography characteristics.
The additional features of claim 12 relate to well-known uses of neural networks for extracting video characteristics, as evidenced by Niu et al., Rakshit et al., Zhenqi et al., and Atoum et al. as noted above. The analysis of the brightness of the video data, seeking a colorimetric drift in order to characterize the probability that said video data has been refilmed, corresponds to techniques well known to a person skilled in the art, as noted in these prior art references. Thus, it would have been obvious to one skilled in the art prior to the effective filing date of the claimed invention to combine the teachings of Atoum et al., Niu et al., Rakshit et al., and Zhenqi et al. with the device disclosed by Hernandez-Ortega et al. to include the additional features extracted using neural networks and complete the solution as presented by the claimed invention.
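For illustration, a luminosity/colorimetric-deviation score of the kind recited in claim 12 could be sketched as follows; the statistic chosen is an assumption for illustration, not a method from any cited reference.

```python
import numpy as np

def luminosity_score(frames):
    """Crude colorimetric-drift cue that a video may have been refilmed from
    a screen: how much the per-frame mean color wanders across the clip.
    Assumes `frames` is an iterable of H x W x 3 arrays."""
    means = np.array([f.reshape(-1, 3).mean(axis=0) for f in frames])
    return float(np.linalg.norm(means.std(axis=0)))
```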
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSICA YIFANG LIN, whose telephone number is (571) 272-6435. The examiner can normally be reached Monday through Friday, 7:00 am to 6:15 pm, with an optional day off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JESSICA YIFANG LIN/
Examiner, Art Unit 2668
January 28, 2026
/VU LE/Supervisory Patent Examiner, Art Unit 2668