Prosecution Insights
Last updated: April 19, 2026
Application No. 19/057,766

UTILIZING ONE OR MORE MODELS TO AUDIT A PROCESS

Non-Final OA: §101, §103, §112, §DP
Filed: Feb 19, 2025
Examiner: LE, LINH GIANG
Art Unit: 3686
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Mynatek Inc.
OA Round: 1 (Non-Final)
Grant Probability: 66% (Favorable)
OA Rounds: 1-2
To Grant: 3y 6m
With Interview: 61%

Examiner Intelligence

Career Allow Rate: 66% (above average; 444 granted / 675 resolved; +13.8% vs TC avg)
Interview Lift: -5.2% (minimal; based on resolved cases with interview)
Avg Prosecution: 3y 6m (typical timeline; 19 applications currently pending)
Total Applications: 694 (career history, across all art units)
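
For readers who want to reproduce these headline figures, here is a minimal sketch of how an allow rate and interview lift can be computed from resolved-case records. The ResolvedCase record layout is a hypothetical stand-in, not this page's actual data model:

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # application issued as a patent
    had_interview: bool  # at least one examiner interview was held

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Share of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow rate among interviewed cases minus allow rate among the rest."""
    interviewed = [c for c in cases if c.had_interview]
    rest = [c for c in cases if not c.had_interview]
    return allow_rate(interviewed) - allow_rate(rest)

# 444 grants over 675 resolved cases gives 444 / 675 ≈ 0.658, the 66%
# career allow rate shown above; a lift of -5.2 points means interviewed
# cases were allowed slightly less often than non-interviewed ones.
```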

Statute-Specific Performance

§101: 33.5% (-6.5% vs TC avg)
§103: 30.3% (-9.7% vs TC avg)
§102: 12.6% (-27.4% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 675 resolved cases
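
These per-statute figures plausibly come from an overcome-rate calculation like the sketch below. The record layout is an assumption; note that all four displayed deltas are consistent with a uniform 40% Tech Center baseline (e.g., 33.5% - 40% = -6.5%):

```python
from collections import Counter

def overcome_rates(cases: list[dict]) -> dict[str, float]:
    """cases: records like {"rejections": {"101", "103"}, "overcome": {"103"}},
    i.e. which statutes each case was rejected under and which of those
    rejections the applicant eventually overcame."""
    rejected: Counter = Counter()
    overcome: Counter = Counter()
    for case in cases:
        for statute in case["rejections"]:
            rejected[statute] += 1
            if statute in case["overcome"]:
                overcome[statute] += 1
    return {s: overcome[s] / rejected[s] for s in rejected}

TC_AVG = 0.40  # baseline implied by the displayed deltas (an inference, not page data)
# delta_vs_tc = overcome_rates(cases)["101"] - TC_AVG  # e.g., 0.335 - 0.40 = -0.065
```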

Office Action

Rejections: §101, §103, §112, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

This communication is in response to the application filed 2/19/2025. It is noted that the application is a continuation of 18/747,154 filed 6/18/2024 (now US Patent No. 12,266,446), which is a CIP of 18/671,808 filed 5/22/2024 (now US Patent No. 12,380,995), which is a continuation of 18/236,824 filed 8/22/2023 (now US Patent No. 12,033,748), which claims priority to Provisional Application No. 63/532,729 filed 8/15/2023. Claims 1-20 are pending.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1, 21, and 22 of U.S. Patent No. 12,266,446. The instant claims are not patentably distinct from the claims of the '446 patent because the '446 patent claims the same basic invention: a plurality of sensors obtaining data associated with one or more workers performing a task, including an image sensor and a second sensor; computer vision extracting features/patterns and recognizing gestures/actions; generating a feature vector including second-sensor values and extracted-feature values; inputting the feature vector to one or more models to determine whether the workers correctly performed the task; and outputting a notification indicating whether the workers correctly performed the task.

The instant claims differ primarily in broadening the second sensor from the specifically claimed thermal sensor of the '446 patent to a broader "different type of sensor," and in adding language regarding analyzing images/specifications associated with the environment and determining the plurality of sensors for the environment. These differences would have been obvious because the '446 patent itself teaches analyzing images and specifications to determine suitable sensors for monitoring, and further teaches determining which sensors are relevant to a task and generating a feature vector from the relevant sensors. Accordingly, the instant claims merely claim an obvious variant of the invention claimed in the '446 patent and are therefore unpatentable on the ground of nonstatutory obviousness-type double patenting. A terminal disclaimer may be filed to obviate this rejection.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. In particular, the limitation "determine the plurality of sensors for the environment to monitor a performance of the one or more workers performing the task in the environment" as recited in independent claims 1, 19, and 20 is unclear. It is unclear whether a number or a type of sensors is being determined, or what the limits of the determination are. Clarification is needed. Dependent claims 2-18 incorporate the deficiencies of the independent claim from which they depend.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claims 1-18 are drawn to a system for utilizing one or more models to audit a process, which is within the four statutory categories (i.e., machine). Claim 19 is drawn to a method for utilizing one or more models to audit a process, which is within the four statutory categories (i.e., process). Claim 20 is drawn to a computer program product for utilizing one or more models to audit a process, which is within the four statutory categories (i.e., article of manufacture).

Representative independent claim 1 includes limitations that recite at least one abstract idea. Specifically, independent claim 1 recites:

A system, comprising: a plurality of sensors configured to obtain data associated with one or more workers performing a task in an environment, wherein a first sensor of the plurality of sensors is an image sensor and a second sensor of the plurality of sensors is a different type of sensor than the first sensor; and a processor coupled to the plurality of sensors and configured to: analyze images and specifications associated with the environment; determine the plurality of sensors for the environment to monitor a performance of the one or more workers performing the task in the environment; process the data associated with the one or more workers, wherein computer vision is utilized to extract one or more features or patterns from image data obtained from the image sensor and to recognize gestures and/or actions performed by the one or more workers; generate a feature vector that includes one or more sensor values associated with the second sensor and corresponding values associated with the extracted features or the extracted patterns; input the feature vector to one or more models to determine whether the one or more workers correctly performed the task; and output a notification indicating whether the one or more workers correctly performed the task based on an output of the one or more models.

These recited limitations (underlined in the original action) fall within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas as they relate to managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) (see MPEP § 2106.04(a)(2), subsection II). The limitations of obtaining data; analyzing images and specifications; determining the plurality of sensors for the environment; processing the data; and outputting a notification, as drafted and detailed above, are steps that, under their broadest reasonable interpretation, recite steps for organizing human interactions. The claimed invention is directed to receiving information, processing the data, and outputting a notification. This is a concept relating to tracking or filtering information; tracking information or filtering content has been found to be an abstract idea and a method of organizing human behavior. This is a method of auditing a process, thus falling into one category of abstract idea. That is, other than reciting "a processor" language, nothing in the claim element precludes the steps from describing concepts related to receiving data related to a process, auditing the data, and generating a notification. If a claim limitation, under its broadest reasonable interpretation, covers concepts related to interpersonal and intrapersonal activities, then it falls within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas.
Accordingly, the claim recites an abstract idea.

In the present case, the additional limitations beyond the above-noted at least one abstract idea are as follows (where, in the original action, the bolded portions are the "additional limitations" while the underlined portions continue to represent the at least one "abstract idea"):

A system, comprising: a plurality of sensors configured to obtain data associated with one or more workers performing a task in an environment, wherein a first sensor of the plurality of sensors is an image sensor and a second sensor of the plurality of sensors is a different type of sensor than the first sensor; and a processor coupled to the plurality of sensors and configured to: analyze images and specifications associated with the environment; determine the plurality of sensors for the environment to monitor a performance of the one or more workers performing the task in the environment; process the data associated with the one or more workers, wherein computer vision is utilized to extract one or more features or patterns from image data obtained from the image sensor and to recognize gestures and/or actions performed by the one or more workers; generate a feature vector that includes one or more sensor values associated with the second sensor and corresponding values associated with the extracted features or the extracted patterns; input the feature vector to one or more models to determine whether the one or more workers correctly performed the task; and output a notification indicating whether the one or more workers correctly performed the task based on an output of the one or more models.

For the following reasons, the Examiner submits that the above-identified additional limitations do not integrate the above-noted at least one abstract idea into a practical application. The additional elements (i.e., the limitations not identified as part of the abstract idea) amount to no more than limitations which:

- amount to mere instructions to apply an exception, see MPEP 2106.05(f):
  - the recitation of performing the functions by the at least one processor amounts to merely invoking a computer as a tool to perform the abstract idea, e.g., see paragraph [0100] of the present Specification;
  - the recitations of utilizing computer vision, generating a feature vector, and inputting the feature vector into a model recite only the idea of a solution or outcome (i.e., the claim fails to recite details of how a solution to a problem is accomplished);
  - in order to transform a judicial exception into a patent-eligible application, the additional element or combination of elements must do "more than simply stat[e] the [judicial exception] while adding the words 'apply it'". The Examiner submits that these limitations amount to merely using software to tailor information and provide it to the user on a generic computer. The limitations teach training a machine learning model in a generic manner. Applicant does not provide adequate evidence or technical reasoning on how the process improves the efficiency of the computer beyond conventional use of components, as opposed to the efficiency of the process or of any other technological aspect of the computer.
- generally link the abstract idea to a particular technological environment or field of use, see MPEP 2106.05(h): for example, the recitation of obtaining data from sensors and processing the data by a processor merely limits the abstract idea to the environment of a computer connected to sensors.
Thus, taken alone, the additional elements do not integrate the at least one abstract idea into a practical application. Independent claim 1 does not include additional elements that are sufficient to amount to "significantly more" than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception and generally linking the abstract idea to a particular technological environment or field of use, and the same analysis applies with regard to whether they amount to "significantly more." Therefore, the additional elements do not add significantly more to the at least one abstract idea.

As per claim 19, the claim teaches limitations similar to claim 1 and the same abstract idea ("certain methods of organizing human activity") for the same reasons as stated above. Independent claim 19 is directed to an abstract idea.

As per claim 20, the claim teaches limitations similar to claim 1 and the same abstract idea ("certain methods of organizing human activity") for the same reasons as stated above. Claim 20 further teaches a computer program product comprising instructions to perform the functionality taught by claim 1. These limitations of a computer program product, as generally recited, amount to mere instructions to apply an exception, see MPEP 2106.05(f), and generally link the abstract idea to a particular technological environment or field of use, see MPEP 2106.05(h). Independent claim 20 is directed to an abstract idea.

The following dependent claims further define the abstract idea or are also directed to an abstract idea themselves: Dependent claims 6-13 further teach details about the "extracted features" and describe the "notification." These features further define the at least one abstract idea, "certain methods of organizing human activity" (and thus fail to make the abstract idea any less abstract). In relation to claims 4, 5, 14, 15, 17, and 18, these claims specify preprocessing data; extracting features; receiving an indication to recalibrate and recalibrating the sensors; affixing an item to the object; and enhancing the data, which are steps directed to methods of organizing human activity under their broadest reasonable interpretation, as they cover interactions between people or managing personal behavior or relationships.

The remaining dependent claim limitations not addressed above fail to integrate the abstract idea into a practical application, as set forth below:

Claim 2: This claim specifies the types of sensors, which thus does no more than generally link use of the abstract idea to a particular technological environment or field of use without altering or affecting how the at least one abstract idea is performed (see MPEP § 2106.05(e)).

Claim 3: This claim recites receiving data, which therefore merely represents insignificant extra-solution (data output) activity (see MPEP § 2106.05(g)).

Claim 16: This claim generally recites machine learning models, which thus amounts to mere instructions to apply an exception by invoking the computer as a tool, or recites the idea of a solution or outcome (i.e., the claim fails to recite details of how a solution to a problem is accomplished) (see MPEP § 2106.05(f)).
The dependent claims further do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the dependent claims do not integrate the at least one abstract idea into a practical application.

Therefore, claims 1-20 are ineligible under 35 USC § 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Grantcharov (2018/0122506) in view of Sawhney (2023/0019745).

As per claim 1, Grantcharov teaches a system, comprising:

a plurality of sensors configured to obtain data associated with one or more workers performing a task in an environment, wherein a first sensor of the plurality of sensors is an image sensor and a second sensor of the plurality of sensors is a different type of sensor than the first sensor (Grantcharov; paras. [0122], [0133], [0156] and Fig. 16; para. [0054]: triggering collection of real-time medical or surgical data streams by smart devices including cameras, sensors, audio devices, and patient monitoring hardware, the medical or surgical data relating to a real-time medical procedure within an operating or clinical site); and

a processor (Grantcharov; paras. [0119], [0120]) coupled to the one or more sensors and configured to:

analyze images and specifications associated with the environment (Grantcharov; para. [0037]: the control station configured to control processing of the data streams; para. [0071]: the network server or encoder configures the multi-nodal perception engine for filtering the time-stamped clinical events within the session container file);

determine the plurality of sensors for the environment to monitor a performance of the one or more workers performing the task in the environment (Grantcharov; para. [0163]: The data platform 10 may be a modular system and not limited in terms of data feeds—any measurable parameter in the OR/patient intervention areas (e.g., data captured by various environmental acoustic, electrical, flow, angle/positional/displacement and other sensors, wearable technology video/data stream, etc.) may be added to the data platform 10);

process the data associated with the one or more workers (Grantcharov; one or more aspects of embodiments may include analyzing data using validated rating tools which may look at different aspects of a clinical intervention); and

output a notification indicating whether the one or more workers correctly performed the task based on an output of the one or more models (Grantcharov; para. [0326]: the confidence level and/or confidence score is stored in metadata and incorporated into instruction sets for notifications of when in a particular surgical procedure the data feeds should be reviewed to assess the presence and/or absence of technical errors and/or events).

Grantcharov (paras. [0160], [0310], [0315], [0316]) teaches a perception engine that may be configured to filter content, categorize, profile, extract features, uncover underlying data behaviors, and provide evidence of correlation of events in complex multi-variable processes and timelines. However, Grantcharov does not expressly teach: wherein computer vision is utilized to extract one or more features or patterns from image data obtained from the image sensor and to recognize gestures and/or actions performed by the one or more workers; generate a feature vector that includes one or more sensor values associated with the second sensor and corresponding values associated with the extracted features or the extracted patterns; and input the feature vector to one or more models to determine whether the one or more workers correctly performed the task.

These features were old and well-known in the art as evidenced by Sawhney. In particular, Sawhney teaches:

wherein computer vision is utilized to extract one or more features or patterns from image data … and to recognize gestures and/or actions — Sawhney para. [0028] teaches tracking user state from hand pose, gaze, and head pose, visual analysis of the environment, and a multimodal action-recognition module for detecting user actions from multimodal information.

generate a feature vector that includes one or more sensor values … and corresponding values associated with the extracted features or extracted patterns — Sawhney para. [0030] teaches a data structure including a feature vector characterizing the multimodal synchronized state generated from multiple physical and human signals.

input the feature vector to one or more models to determine whether the one or more workers correctly performed the task — Sawhney para. [0040] teaches a machine-learning action-recognition model trained on correct and incorrect step performance and used in progress assessment/guidance logic to determine whether a detected user action matches the expected action for the current step.

It would have been obvious to a person of ordinary skill in the art at the time of the invention to modify Grantcharov's task monitoring system with Sawhney's more specific multimodal action-recognition, feature-vector, and guidance teachings in order to improve automated recognition of worker gestures/actions, determine whether a worker correctly performed a task step, and provide improved feedback. Both references are directed to monitoring users performing real-world processes in an environment using multiple sensor modalities and evaluating progress or performance, so the combination would have represented a predictable use of known multimodal analysis techniques to improve task-performance assessment.
As per claim 2, Grantcharov teaches the system of claim 1, wherein the one or more sensors include an image sensor, a thermal sensor, a pressure sensor, a torque sensor, a temperature sensor, a radiation sensor, a proximity sensor, a position sensor, a flow sensor, a contact sensor, an acoustic sensor, a light sensor, a radar sensor, a millimeter wave sensor, an ultrasonic sensor, a touch sensor, an accelerometer, a humidity sensor, an infrared sensor, a light sensor, a color sensor, a gas sensor, a gyroscope, a hall sensor, a capacitive sensor, an analog sensor, a photoelectric sensor, a level sensor, a chemical sensor, an optical sensor, an active sensor, and/or a force sensor (Grantcharov; para. [0122]: Example sensors 34 installed or utilized in a surgical unit, ICU, emergency unit or clinical intervention units include but are not limited to: environmental sensors (e.g., temperature, moisture, humidity, etc.), acoustic sensors (e.g., ambient noise, decibel), electrical sensors (e.g., hall, magnetic, current, mems, capacitive, resistance), flow sensors (e.g., air, fluid, gas), angle/positional/displacement sensors (e.g., gyroscopes, altitude indicator, piezoelectric, photoelectric), and other sensor types (e.g., strain, level sensors, load cells, motion, pressure)).

As per claim 3, Grantcharov teaches the system of claim 1, wherein the processor is configured to receive the data associated with the one or more workers performing the task (Grantcharov; paras. [0054], [0155]: allows gathering of comprehensive information from every aspect of the individual, team and/or technology performances and their interaction during clinical interventions).

As per claim 4, Grantcharov teaches the system of claim 3, wherein the processor is configured to pre-process some or all of the data associated with the one or more workers performing the task (Grantcharov; para. [0293]: the feeds may first require processing or pre-processing to extract feature sets).

As per claim 5, Grantcharov teaches the system of claim 4, wherein pre-processing some or all of the data associated with the one or more workers performing the task includes extracting one or more features or patterns (Grantcharov; para. [0293]: the feeds may first require processing or pre-processing to extract feature sets).

As per claim 6, Grantcharov teaches the system of claim 5, wherein the one or more extracted features or patterns include edges, shapes, textures, colors, gestures, and/or actions (Grantcharov; para. [0316]: The perception engine 2000 may be configured to filter content, categorize, profile, extract features, uncover underlying data behaviors (i.e., gestures and/or actions) and provide evidence of correlation of events in complex multi-variable processes and timelines).

As per claim 7, Grantcharov teaches the system of claim 5, wherein the one or more extracted features or patterns and the some or all of the data associated with the one or more workers is inputted to the one or more models trained to determine whether the one or more workers correctly performed the task (Grantcharov; para. [0326]: The perception engine 2000 may be tuned such that clinical events are flagged with a particular confidence level and/or confidence score. The confidence level and/or the confidence score may also be associated with a competence level or a competence score).

As per the features of claims 8-13:

8. The system of claim 1, wherein the notification is provided after the task is completed.
9. The system of claim 1, wherein the notification is provided to a device associated with the one or more workers.
10. The system of claim 1, wherein the notification includes one or more comments indicating why the one or more workers incorrectly performed the task.
11. The system of claim 1, wherein the notification includes one or more recommendations indicating what the one or more workers can do to correctly perform the task.
12. The system of claim 1, wherein the notification includes information indicating how to correctly perform the task.
13. The system of claim 1, wherein the notification is provided after completion of a process that includes the task.

The claim limitations of claims 8-13 are considered non-functional descriptive material and given little to no patentable weight. Descriptive material that cannot exhibit any functional interrelationship with the way in which computing processes are performed does not distinguish the claim over the prior art and is given little to no patentable weight. The data that is being outputted in the notification, and when the notification is outputted, is considered non-functional descriptive material, as this does not change the functional interrelationship with the way in which a processor outputs a notification.

As per claim 14, Grantcharov teaches the system of claim 1, wherein the processor is configured to receive an indication to recalibrate the one or more sensors and/or the one or more models (Grantcharov; para. [0084]: These predictions may be verified and/or compared against records of tracked incidents and/or events for accuracy, and the perception engine may be tuned (i.e., recalibrated) over a period of time based on the particular outputs desired, their accuracy, specificity, and sensitivity, among others).

As per claim 15, Grantcharov teaches the system of claim 14, wherein the processor is configured to recalibrate one or more of the one or more sensors and/or the one or more models in response to receiving the indication (Grantcharov; para. [0084]: These predictions may be verified and/or compared against records of tracked incidents and/or events for accuracy, and the perception engine may be tuned (i.e., recalibrated) over a period of time based on the particular outputs desired, their accuracy, specificity, and sensitivity, among others).

As per claim 16, Grantcharov teaches the system of claim 1, wherein the one or more models include one or more machine learning models trained using supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning (Grantcharov; para. [0300]).

As per claim 17, Grantcharov teaches the system of claim 1, further comprising an exciter to enhance the data associated with the one or more workers performing the task (Grantcharov; para. [0041]).

As per claim 18, Grantcharov teaches the system of claim 1, wherein one or more items are affixed to an object from which the one or more sensors are monitoring (Grantcharov; para. [0054]).

Claims 19 and 20 repeat substantially similar limitations as claim 1, and the reasons for rejection are incorporated herein.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Park (KR 102331335 B1), the closest foreign prior art of record, teaches a vulnerable-person care robot comprising: a user data collection unit for collecting at least one of user appearance data, respiration data, movement data, temperature data, and fingerprint data; a user detection unit for detecting a health condition of a user using at least one of a respiration sensor, a motion sensor, an infrared temperature sensor, a sound sensor, and a radar sensor; an image photographing unit for photographing an image using a camera when a preset condition is satisfied on the basis of data received by the user detection unit, and transmitting the photographed image to a data transceiving unit; a data transceiving unit for receiving the data from the user data collection unit or the user detection unit, and transmitting image data photographed by the image photographing unit to an emergency contact network; and an AI communication unit for communicating with the user on the basis of a pre-stored inquiry and answer list, and leveling the user in accordance with a level system.

Salvador (Salvador, R. A. A., & Naval Jr., P. C. (2022). Towards a Feasible Hand Gesture Recognition System as Sterile Non-contact Interface in the Operating Room with 3D Convolutional Neural Network. Informatica (03505596), 46(1), 1–12. https://doi.org/10.31449/inf.v46i1.3442), the closest non-patent literature of record, teaches a deep computer-vision-based hand gesture recognition framework to facilitate touchless interaction. Salvador teaches training a 3D Convolutional Neural Network with a very large scale dataset to classify hand gestures robustly.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINH GIANG MICHELLE LE, whose telephone number is (571) 272-8207. The examiner can normally be reached Mon-Fri 8:30am-5:30pm PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JASON DUNHAM, can be reached at 571-272-8109. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

LINH GIANG "MICHELLE" LE
PRIMARY EXAMINER
Art Unit 3686

/LINH GIANG LE/
Primary Examiner, Art Unit 3686
3/19/2026
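
Stepping back from the OA text: to make the data flow recited in claim 1 concrete, here is an illustrative sketch of the claimed pipeline, in which second-sensor readings and computer-vision feature values are concatenated into a feature vector that a trained model maps to a correct/incorrect verdict. All names are hypothetical, the model is assumed to be any scikit-learn-style classifier, and this is not the applicant's actual implementation:

```python
import numpy as np

def build_feature_vector(cv_features: dict[str, float],
                         second_sensor_values: list[float]) -> np.ndarray:
    """Concatenate second-sensor readings with computer-vision feature
    values (e.g., gesture/action recognition scores) in a fixed order."""
    ordered_cv = [cv_features[name] for name in sorted(cv_features)]
    return np.array(second_sensor_values + ordered_cv, dtype=float)

def audit_task(model, cv_features: dict[str, float],
               second_sensor_values: list[float]) -> str:
    """Feed the feature vector to the model and emit the claimed notification."""
    vec = build_feature_vector(cv_features, second_sensor_values)
    correct = bool(model.predict(vec.reshape(1, -1))[0])  # sklearn-style API assumed
    return ("Notification: task performed correctly" if correct
            else "Notification: task performed incorrectly")
```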

Prosecution Timeline

Feb 19, 2025
Application Filed
Mar 20, 2026
Non-Final Rejection — §101, §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597522
METHOD AND SYSTEM FOR MANAGING PRESSURE ULCERS AND COMPUTING DEVICE FOR EXECUTING THE SAME
2y 5m to grant • Granted Apr 07, 2026
Patent 12580066
ARTIFICIAL INTELLIGENCE SYSTEM ON AN ELECTRONIC DEVICE
2y 5m to grant • Granted Mar 17, 2026
Patent 12573501
SMART-PORT MULTIFUNCTIONAL READER/IDENTIFIER IN A PRODUCT STERILIZATION CYCLE
2y 5m to grant • Granted Mar 10, 2026
Patent 12567484
AUTOMATIC MEDICAL DEVICE PATIENT REGISTRATION
2y 5m to grant • Granted Mar 03, 2026
Patent 12548650
METHODS AND SYSTEMS FOR PERFORMING DOSE TITRATION
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66%
With Interview: 61% (-5.2%)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 675 resolved cases by this examiner. Grant probability derived from career allow rate.
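
As a sanity check, these projections reduce to simple arithmetic over the examiner statistics shown earlier, assuming (as the note above says) that grant probability is just the career allow rate, shifted by the interview lift:

```python
base_grant_prob = 444 / 675                        # ≈ 0.658 -> shown as 66%
interview_lift = -0.052                            # from the interview data above
with_interview = base_grant_prob + interview_lift  # ≈ 0.606 -> shown as 61%
print(f"base {base_grant_prob:.0%}, with interview {with_interview:.0%}")
```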

Free tier: 3 strategy analyses per month