Prosecution Insights
Last updated: April 19, 2026
Application No. 18/133,619

AUTOMATED BEHAVIOR MONITORING AND MODIFICATION SYSTEM

Non-Final Office Action (§102, §103)
Filed: Apr 12, 2023
Examiner: TRAN, LARA LINH
Art Unit: 3791
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Mindfulgarden Digital Health Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 3y 2m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -70.0% vs TC avg)
Interview Lift: minimal (+0.0% lift among resolved cases with interview)
Avg Prosecution: 3y 2m (typical timeline)
Total Applications: 35 (career history across all art units; 35 currently pending)

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§103: 39.6% (-0.4% vs TC avg)
§102: 26.4% (-13.6% vs TC avg)
§112: 27.1% (-12.9% vs TC avg)

Deltas are relative to the Tech Center average estimate • Based on career data from 0 resolved cases
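The page does not state how the "vs TC avg" deltas are computed, but the figures shown are consistent with a simple subtraction of a per-statute Tech Center average (about 40.0% for each statute above) from the examiner's allow rate. The sketch below is a minimal illustration under that assumption; the 40.0% average and the variable names are inferred for illustration, not taken from the tool.

```python
# Minimal sketch (assumption): "vs TC avg" = examiner allow rate - Tech Center average.
# The ~40.0% per-statute average is inferred from the deltas shown above, not published data.
tc_average = 40.0  # assumed Tech Center average allow rate (%)

examiner_rates = {"§101": 3.5, "§103": 39.6, "§102": 26.4, "§112": 27.1}

for statute, rate in examiner_rates.items():
    delta = rate - tc_average
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
# Reproduces the figures above, e.g. §101: 3.5% (-36.5% vs TC avg)
```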

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 4 is objected to because of the following informalities: Regarding claim 4, “and” in “and increase in movement of visual content presented to the patient” should be changed to “an”. Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-12, 14, 15 and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zayfert et al. (US 20100010371 A1).

Regarding claim 1, Zayfert et al. teaches a system for providing automated behavior monitoring and modification in a patient (“the program will enable clinical decisions to be guided by an array of parameters indicative of patients’ mental state, including physiological and behavioral measures”, paragraph [0112]), the system comprising:

An audio/visual device configured to present audible and/or visual content (“I/O devices 16…provide a video screen for displaying a graphical environment and a speaker for delivering sound for communicating with the patient 65”, paragraph [0039]) to a patient exhibiting one or more disruptive behaviors associated with a mental state (“exposure therapy…varied in correspondence to the monitored mental state of the patient”, paragraph [0009]);

One or more sensors configured to continuously capture patient activity data (“the exemplary embodiment may also use a variety of sensors to track the current health and/or mental status of the patient”, paragraph [0039]) during presentation of an audible and/or visual content, the patient activity data comprising at least one of patient motion (“monitoring…restlessness, and shifting in chair”, paragraph [0015]), vocalization (“monitoring of the speech of the patient”, paragraph [0056]), and physiological readings (“monitoring…at least one biological or physiological characteristic”, paragraph [0015]); and

A computing system operably associated with the audio/visual device and configured to control output of the audible and/or visual content therefrom based, at least in part, on the patient activity data (“any computer system…computer-implemented techniques may be employed for generating control signals to any of various stimuli generators that may be employed at the patient communication module 110, or for receiving feedback from any of various input devices that may be employed at the patient monitoring module 120”, paragraph [0104]; software 100, Fig. 2), wherein the computing system is configured to:

Receive and analyze, in real time, patient activity data from the one or more sensors and determine a level of increase or decrease in patient activity over a period of time (“the patient monitoring module 120 integrate, compare, contrast, reconcile, or otherwise reflect the results of monitoring from the various means employed before generating a mental state metric indicative of the mental state of the patient 65”, paragraph [0060]; patient monitoring module 120, Fig. 2); and

Dynamically adjust a level of output of the audible and/or visual content from the audio/visual device to correspond to the determined level of increase or decrease in patient activity (“navigation module 140 to adjust the intensity with which a stimulus 156 is played back to the patient 65…playback intensity as used herein refers to any parameter affecting playback that can be varied so as to increase or decrease the psychological impact…examples of such parameters include audio volume, video contrast, color saturation, frequency”, paragraph [0091]; navigation module 140, Fig. 2).

Regarding claims 2 and 5, Zayfert et al. teaches a(n) increase/decrease in patient activity comprising at least one of increased/decreased patient motion, vocalization, and levels of physiological readings, respectively (“playback that can be varied so as to increase or decrease the psychological impact”, paragraph [0091]). The patient activity would increase if the detected patient motion, vocalization, or levels of physiological readings increased.

Regarding claims 3 and 6, Zayfert et al. teaches the computing system being configured to increase/decrease a level of output of audible and/or visual content to correspond to a(n) increase/decrease in patient activity, respectively (“the intensity module 144 in one embodiment might cause playback intensity for a stimulus to be set to increasingly higher levels until it is determined that the patient 65 has habituated sufficiently to that stimulus”, paragraph [0064]; “intensity module 144 optionally adjusts playback intensity to what is expected to be a suitable level based on patient mental state”, paragraph [0097]; intensity module 144, Fig. 2).

Regarding claims 4 and 7, Zayfert et al. teaches a(n) increased/decreased level of output of audible and/or visual content comprising at least one of: a(n) increase/decrease in an amount of visual content presented to the patient; a(n) increase/decrease in a type of visual content presented to the patient (“the computer may provide or direct the visual and the audio exposure responsive to the level of sensed anxiety or distress”, paragraph [0010]); and a(n) increase/decrease in frequency and/or tone of audible content presented to the patient (“playback intensity as used herein refers to any parameter affecting playback that can be varied so as to increase or decrease the psychological impact, particularly the level of anxiety or distress…examples of such parameters…frequency response or range, monaural versus stereo”, paragraph [0091]).

Regarding claim 8, Zayfert et al. teaches the computing system being configured to dynamically adjust levels of output of the audible and/or visual content based on adjustable predefined ratios applied to patient activity data (“stimuli are being indexed by mental state metric and organized into a hierarchy, this initial value of the mental state metric for each stimulus can serve as a baseline”, paragraph [0072]; “a therapist 55 might determine that a patient 65 had habituated when the mental state metric values stored in the form of a monitoring history 158 indicate a consistent trend indicative of steadily decreasing anxiety”, paragraph [0096]).

Regarding claim 9, Zayfert et al. teaches the patient motion comprising facial expressions (“monitoring of the facial expressions of the patient 65 by the audiovisual monitoring module 124”, paragraph [0057]; audiovisual monitoring module 124, Fig. 2), physical movement, and/or physical gestures (“patient might wear an actigraph or actimetry sensor…for measuring motion”, paragraph [0114]).

Regarding claim 10, Zayfert et al. teaches the physiological readings comprising at least one of the patient’s: body temperature, heart rate, heart rate variability, blood pressure, respiratory rate, respiratory depth, and skin conductance (“the at least one biological or physiological characteristic may include at least one species chosen from among the group consisting of breathing, heart rate, blood pressure, peripheral resistance, skin temperature, skin conductance”, paragraph [0015]).

Regarding claim 11, Zayfert et al. teaches the one or more disruptive behaviors comprising varying levels of distress associated with the mental state (“monitoring by the physiologic module 126 in the context of embodiments of the present invention are phenomena indicative of nervousness, stress, anxiety, distress, or similar emotional state”, paragraph [0058]; physiologic module 126, Fig. 2).

Regarding claim 12, Zayfert et al. teaches the disruptive behaviors being associated with any mental state or mental disorder, such as delirium (“person to whom treatment is administered by the device…persons having any of a variety of types of emotional distress, including, without limitation, anxiety disorders…or stress-related problems or other psychosocial problems or conditions”, paragraph [0028]).

Regarding claim 14, Zayfert et al. teaches the one or more sensors comprising one or more cameras (“audiovisual monitoring module 124…include a webcam setup or other such equipment for monitoring the speech and/or facial expressions”, paragraph [0055]), one or more motion sensors (), one or more microphones (“microphone could be employed to record the oral description of the patient”, paragraph [0076]), and/or one or more biometric sensors (“the I/O devices 16 may include input devices…biosensor”, paragraph [0037]; audiovisual monitoring module 124, Fig. 2).

Regarding claim 15, Zayfert et al. teaches the audible and/or visual content presented to the patient comprising sounds and/or images (“the patient might be exposed to stimuli associated with the traumatic event, such as, but not limited to, objects, clothing, persons, smells, sounds, pictures, or locations that elicit emotional distress due to their association with the traumatic event”, paragraph [0004]).

Regarding claim 18, Zayfert et al. teaches the content in the images being synchronized to the time of day in which the images are presented to the patient (“indexing module 134…for associating with a stimulus the mental state metric generated by the patient monitoring module 130 at the time that the stimulus is being experienced…history of times at which the stimuli were recorded”, paragraph [0063]; indexing module 134, patient monitoring module 130, Fig. 2).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claim 13 is rejected under 35 U.S.C. 103 as being obvious over Zayfert et al. in view of Palmer (US 20110091544 A1).

Regarding claim 13, Zayfert et al. teaches all the limitations of claim 12, but does not teach associating the levels of distress with the Richmond Agitation Sedation Score and/or Delirium Score. However, Palmer teaches a system that measures varying levels of agitation and distress with the Richmond Agitation Sedation Score (“using the validated, objective Richmond Agitation-Sedation Scale (RASS; +4 for highly agitated to -5 for unarousable”, paragraph [0178]; “RASS values once the procedure began, indicating patient anxiety and restlessness”, paragraph [0221]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Zayfert et al. in view of Palmer to utilize the Richmond Agitation Sedation Score in order to follow an objective baseline for rating a patient’s levels of agitation and/or distress when determining the level of stimulus for treatment.

Claim 16 is rejected under 35 U.S.C. 103 as being obvious over Zayfert et al. in view of Divine et al. (US 20180247024 A1).

Regarding claim 16, Zayfert et al. teaches all the limitations of claim 15, but does not teach the images comprising two-dimensional (2D) and three-dimensional (3D) video-layered animations. However, Divine et al. teaches a system identifying an area, gathering two-dimensional (2D) images, and video-layering them with three-dimensional (3D) animations (“overlay data can be displayed in a manner such that it appears spatially situated (e.g., in two-dimensional (2D) or three-dimensional (3D) space)”, paragraph [0035]; “overlay data can include but is not limited to: text, a symbol, a marker, a tag, an image, a video, an animation”, paragraph [0034]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Zayfert et al. with the system of Divine et al. in order to layer 3D animations over real-world videos and images (“AR device can further render the overlay data via the display of the AR device over the live view (e.g., either real or video/image) of the environment also presented on or viewed through the display”, paragraph [0034]) and thereby match the content to a patient’s mental and physical state, or needs for treatment (“the system can generate overlay data that indicates the incorrect placement of the IV needle…relative to the patient’s IV site”, paragraph [0035]).

Claims 17 and 19 are rejected under 35 U.S.C. 103 as being obvious over Zayfert et al. in view of Tzvieli et al. (US 20170367651 A1).

Regarding claim 17, Zayfert et al. teaches all the limitations of claim 15, but does not teach the images comprising nature-based imagery. However, Tzvieli et al. teaches a device that measures the levels of stress in a patient and provides images in response to a physiological response (“help determine the amount of stress a person is feeling”, paragraph [0003]; “user is presented a visual cue”, paragraph [0351]), the images comprising nature-based imagery (“in response to detecting an elevated stress level, the computer may display calming video (e.g., tranquil nature scenes)”, paragraph [0574]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Zayfert et al. with the system of Tzvieli et al. and provide nature-based imagery as part of the visual content displayed to the patient for treatment, as it can reduce levels of distress.

Regarding claim 19, Zayfert et al. teaches all the limitations of claim 15, but does not teach the sounds being noise-cancelling and/or noise-masking. However, Tzvieli et al. teaches the sounds being noise-cancelling (“there are various methods that may be used to process the raw values…noise cancellation”, paragraph [1187]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Zayfert et al. with the system of Tzvieli et al. in order to configure the sounds to be noise-cancelling, selecting sounds at a frequency that enhances calming and anxiety-reducing effects.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LARA LINH TRAN whose telephone number is (571) 272-3598. The examiner can normally be reached 7:30am-5:00pm M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Valvis, can be reached at 571-272-4233. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/L.L.T./
Examiner, Art Unit 3791

/ALEX M VALVIS/
Supervisory Patent Examiner, Art Unit 3791
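The §102 rejection above maps claim 1 onto a closed loop: sensor data is analyzed in real time for an increase or decrease in patient activity over a period of time, and the output level of the audible and/or visual content is adjusted to correspond. For readers following the claim mapping, the sketch below is a minimal, hypothetical illustration of that loop; the function names, window handling, and gain are assumptions and are not taken from the application or from Zayfert.

```python
# Minimal, hypothetical sketch of the claim 1 closed loop (not code from the
# application or the cited references; names, window size, and gain are assumed).

from statistics import mean

def activity_trend(samples):
    """Compare the later half of a monitoring window to the earlier half and
    return the signed change in average patient activity (motion, vocalization,
    and/or physiological readings fused into one score)."""
    mid = len(samples) // 2
    return mean(samples[mid:]) - mean(samples[:mid])

def adjust_output_level(current_level, samples, gain=0.5, lo=0.0, hi=10.0):
    """Dynamically adjust the audio/visual output level to correspond to the
    determined increase or decrease in patient activity, clamped to a valid range."""
    delta = activity_trend(samples)
    return max(lo, min(hi, current_level + gain * delta))

# Example: rising activity over the window nudges the output level upward.
window = [1.0, 1.2, 1.1, 2.0, 2.4, 2.6]   # hypothetical fused sensor readings
print(adjust_output_level(current_level=5.0, samples=window))   # prints ~5.62
```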

Prosecution Timeline

Apr 12, 2023: Application Filed
Feb 17, 2026: Non-Final Rejection — §102, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 2m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
