Prosecution Insights
Last updated: April 19, 2026
Application No. 18/001,474

SYSTEM AND METHOD FOR TREATING POST TRAUMATIC STRESS DISORDER (PTSD) AND PHOBIAS

Non-Final OA §102 §103 §112
Filed
Dec 12, 2022
Examiner
PARK, EVELYN GRACE
Art Unit
3791
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Waji LLC
OA Round
1 (Non-Final)
56%
Grant Probability
Moderate
1-2
OA Rounds
3y 11m
To Grant
99%
With Interview

Examiner Intelligence

Grants 56% of resolved cases
56%
Career Allow Rate
45 granted / 80 resolved
-13.7% vs TC avg
Strong +47% interview lift
+46.9%
Interview Lift
allowance rate with vs. without an interview, among resolved cases
Typical timeline
3y 11m
Avg Prosecution
33 currently pending
Career history
113
Total Applications
across all art units
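The headline figures above are simple ratios over the examiner's resolved cases. As a minimal sketch of the arithmetic (the tool's exact definitions are not published, so the interview-lift definition and the per-bucket counts below are illustrative assumptions, not the report's real inputs):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# Career figures stated in the report: 45 granted of 80 resolved.
career = allow_rate(45, 80)  # 56.25, displayed as 56%

def interview_lift(granted_w: int, resolved_w: int,
                   granted_wo: int, resolved_wo: int) -> float:
    """Assumed definition: percentage-point gap in allowance rate
    between resolved cases with and without an interview."""
    return allow_rate(granted_w, resolved_w) - allow_rate(granted_wo, resolved_wo)

# Hypothetical split (placeholder counts, not from the report):
lift = interview_lift(30, 40, 15, 40)  # 75.0 - 37.5 = 37.5 points
```

Under this assumed definition, the report's +46.9% lift would mean the examiner's with-interview allowance rate runs about 47 points above the without-interview rate.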

Statute-Specific Performance

§101
13.1%
-26.9% vs TC avg
§103
34.1%
-5.9% vs TC avg
§102
31.7%
-8.3% vs TC avg
§112
19.5%
-20.5% vs TC avg
Black line = Tech Center average estimate • Based on career data from 80 resolved cases
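As a quick consistency check on the chart data (assuming each "vs TC avg" figure is a simple percentage-point difference — an inference from the layout, not the tool's documented definition), the implied Tech Center average can be recovered from each statute's rate and delta:

```python
# Statute-specific rates and their stated deltas vs. the Tech Center
# average, as shown above: implied TC average = rate - delta,
# e.g. 13.1 - (-26.9) = 40.0 for §101.
stats = {
    "§101": (13.1, -26.9),
    "§103": (34.1, -5.9),
    "§102": (31.7, -8.3),
    "§112": (19.5, -20.5),
}
implied_tc_avg = {k: round(rate - delta, 1) for k, (rate, delta) in stats.items()}
```

All four pairs imply the same 40.0% figure, consistent with the single black-line "Tech Center average estimate" described in the chart caption.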

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 10 is objected to under 37 CFR 1.75(c) as being in improper form because a multiple dependent claim cannot depend from any other multiple dependent claim. See MPEP § 608.01(n). Accordingly, claim 10 has not been further treated on the merits.

Claim 14 is objected to because of the following informality: “return a results” should read either “return results” or “return a result”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 5-18, and 23-33 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites “each treatment session” in line 10. It is unclear from the “each” language whether the system operates during a single treatment session or during multiple treatment sessions. Line 1 of claim 1 describes “a system for guiding a user during a treatment session for a mental health disorder”, which indicates a single session; however, “each treatment session” implies there are multiple sessions. Further clarification is required.
Additionally, line 3 of claim 1 recites “a screen”, while line 11 recites “a display”. It is unclear whether the screen and the display are meant to be the same element, or different elements of the user computational device. For the purpose of examination, the screen and the display may be the same element, or different elements.

Claim 10 is further rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 10 recites “instructions for analyzing eye movements of the user according to one or more deep learning and/or machine learning algorithms” in lines 2-3. It is unclear whether these eye movements are meant to refer to the same “eye movements” recited in claim 1, or if these are different eye movements. For the purpose of examination, these may be the same eye movements as, or different eye movements from, those described in claim 1.

Claim 12 is further rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 12 recites the term “RNN”, which is not defined in the claims or specification. For the purpose of examination, “RNN” is interpreted to be a Recurrent Neural Network; however, an amendment to the specification or claims is needed to define the “RNN” abbreviation in claim 12.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 5-6, 8-9, 13-16, 18, 23-25, and 27-33 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20200086077 A1 (Gazit et al.).

Regarding claim 1, Gazit teaches a system for guiding a user during a treatment session for a mental health disorder ([0027] “a computer-based method for remote Eye Movement Desensitization and Reprocessing (EMDR) therapy”), comprising a user computational device, the user computational device comprising a camera ([0039] “camera”), a screen ([0039] “touch screen”), a processor, a memory and a user interface ([0015] “a client computing platform, comprising a client computer screen, a processor and associated memory storing instructions”), wherein said user interface is executed by said processor according to instructions stored in said memory ([0015]), wherein eye movements of the user are tracked during the treatment session, and wherein the treatment session comprises a plurality of stages determined according to interactions of the user with said user interface and according to said tracked eye movements ([0020] “a level of correspondence between the client eye movement and the movement of the visual element on the client screen. An image processing algorithm may execute on the client or therapist computers to identify eye motion of the client and to correlate that eye motion with eye movement of the BLS visual element to generate the level of correspondence”); wherein a timing, frequency and length of the treatment session is determined by the user through said user computational device ([0073] “number of sessions, time and duration of the last session”), such that the user controls each treatment session ([0053]); wherein said user computational device further comprises a display for displaying information to the user ([0051] “to display the visual element at the new position or to emit the sound and/or vibration on the side of the client”), and wherein said memory further stores instructions for performing eye tracking and instructions for providing an eye stimulus by being displayed on said display, and wherein said processor executes said instructions for providing said eye stimulus such that said eye stimulus is displayed on said display to the user, and for tracking an eye of said user ([0048] “the client and therapist applications present visual, audio, and/or tactile elements during bilateral stimulations, such as presenting a light moving back and forth across the screen”; [0093] “the therapist can monitor eye movement of the client while the client is following the moving visual element during the bilateral stimulation.”); wherein said instructions further comprise instructions for adjusting said eye stimulus according to said eye tracking ([0053] “Parameters are typically set before a set of bilateral stimulations begins, but they can also be changed during a stimulation activity when appropriate”); wherein said instructions further comprise instructions for moving said eye stimulus from left to right, and from right to left, according to a predetermined speed and for a predetermined period ([0061] “The movement of the visual element may be set to scan, meaning containing the whole movement path from left to right and back (scanning), or to saccades, meaning jumping directly from side to size.”; [0068] “The duration of a set of bilateral stimulations may be predefined according to the needs of the client during therapy, for example for long or short processing, etc.”); wherein said instructions further comprise instructions for determining said predetermined period according to one or more of a physiological reaction of the user ([0095]), tracking said eye of the user and an input request of the user through said user interface ([0053] “sometimes there is a need to change an EMDR set, that is, to change the predefined set of bilateral stimulations, according to client needs while a BLS activity is being conducted.”; [0072] “the dashboard may include options for saving client preferences for BLS parameters … a set duration (i.e., the duration of a set of activities)”; [0086] “As described below, the client application may also make several modifications to the appearance of the client screen when a bilateral stimulation is initiated.”).

Regarding claim 5, Gazit teaches the system of claim 4, wherein said predetermined period comprises a plurality of repetitions of movements of said eye stimulus from left to right, and from right to left ([0018]; [0061] “The movement of the visual element may be set to scan, meaning containing the whole movement path from left to right and back (scanning)”).

Regarding claim 6, Gazit teaches the system of claim 5, wherein said instructions further comprise instructions for determining a degree of attentiveness of the user according to a biometric signal, and for adjusting moving said eye stimulus according to said degree of attentiveness ([0095] “The image processing tracks the eye movement and determines a level of correspondence between the eye movement and with the movement of the visual element on the client screen. The level of correspondence can be indicated to the therapist, who can then determine if parameters of the bilateral stimulation should be changed accordingly, such as changing the speed, type, or range of movement”).

Regarding claim 8, Gazit teaches the system of claims 6 or 7, wherein said biometric signal comprises eye gaze, wherein said user computational device tracks eye gaze through said camera ([0095] “the client or therapist application may include performing image processing of a real time video recording of the client to identify eye movements of the client. Typically, the real time video recording is the video recording performed for the video chat between the client and the therapist. The image processing tracks the eye movement and determines a level of correspondence between the eye movement and with the movement of the visual element on the client screen.”).

Regarding claim 9, Gazit teaches the system of claim 8, wherein said instructions further comprise instructions for determining whether said eye gaze tracking comprises high accuracy eye gaze tracking or low accuracy eye gaze tracking ([0095] “The image processing tracks the eye movement and determines a level of correspondence between the eye movement and with the movement of the visual element on the client screen”), and wherein said degree of attentiveness is calculated according to whether said eye gaze tracking comprises high accuracy eye gaze tracking or low accuracy eye gaze tracking, such that if said eye gaze tracking comprises high accuracy eye gaze tracking, said degree of attentiveness is determined according to a high degree of simultaneous overlap of locations of said eye gaze with locations of said eye stimulus; and alternatively wherein a lower extent of overlap of said locations is considered to calculate said degree of attentiveness ([0095] “the EMDR server may be configured to incrementally increase the speed of the visual element while the correspondence is better than 95%, and to decrease the speed when the correspondence is less than 90%”).

Regarding claim 13, Gazit teaches the system of claim 1, wherein the user computational device further comprises a user input device, wherein the user interacts with said user interface through said user input device to perform said interactions with said user interface during the treatment session ([0049] “During bilateral stimulation, visual, audio, and tactile elements are presented on the client platform 24. Visual elements are typically displayed on the client platform screen. Audio elements are output from the client platform speakers (typically placed on either side of the client) or headphones, and may include music or sounds, which alternate between the speakers. Tactile elements are output from devices that are meant to be touched by the client”).

Regarding claim 14, Gazit teaches the system of claim 1, further comprising a cloud computing platform ([0041] “the EMDR server may be a cloud-hosted server that operates, for example, on a cloud service such as Amazon Web Services Elastic Computing Cloud”), comprising a virtual machine ([0041] “any type of computing device”), comprising a processor and a memory for storing a plurality of instructions ([0038] “the EMDR server are computing platforms that typically includes one or more processors (i.e., CPUs), typically provided as computing chips or as any suitable computing device, and including memory and a storage system”); and a computer network ([0039] “The computing platforms also include wired or wireless network interface cards to facilitate remote communications.”), wherein said user computational device communicates with said cloud computing platform through said computer network ([0040] “the therapist and client platforms may be any type of interactive computing devices, such as mobile devices. Both platforms also typically include an internet browser, such as Google Chrome, as the means of interfacing both with the EMDR server 26 and with each other.”); wherein said processor of said virtual machine executes said instructions on said memory to analyze at least biometric information of the user during a treatment session, and to return a results of said analysis to said user computational device for determining a course of said treatment session ([0020-0022] “An image processing algorithm may execute on the client or therapist computers to identify eye motion of the client and to correlate that eye motion with eye movement of the BLS visual element to generate the level of correspondence. The EMDR server may be further configured to automatically modify at least one parameter controlling the BLS according to a level of correspondence between the client eye movement and the movement of the visual element on the client screen.”); wherein said biometric information is transmitted from a biometric measuring device directly to said cloud computing platform or alternatively is transmitted from said biometric measuring device to said user computational device, and from said user computational device to said cloud computing platform ([0047] “applications run on the therapist and client platforms as browser-based applications, i.e., “client-side code,” which is loaded to the browsers when the browsers are directed to a webpage of the EMDR web server.”).

Regarding claim 15, Gazit teaches the system of claim 14, wherein said virtual machine analyses said biometric information from said biometric measuring device without input from said user computational device ([0095] “The EMDR server may also be configured to automatically modify the bilateral stimulation according to the level of correspondence”).

Regarding claim 16, Gazit teaches the system of claim 14, wherein said virtual machine analyses said biometric information from said biometric measuring device in combination with input from said user computational device ([0053] “change the predefined set of bilateral stimulations, according to client needs while a BLS activity is being conducted.”).

Regarding claim 24, Gazit teaches the system of claim 1, wherein the mental health disorder comprises PTSD (post traumatic stress disorder), a phobia or a disorder featuring aspects of PTSD and/or a phobia ([0003] “Eye Movement Desensitization and Reprocessing (EMDR) is a method of psychological therapy for the treatment of a range of psychological disorders and mental health problems, such as Posttraumatic Stress Disorder (PTSD), trauma, anxiety, chronic pain, and somatic symptoms”).

Regarding claim 25, Gazit teaches a method of treatment of a mental health disorder, comprising operating the system of claim 1 by a user, and adjusting said plurality of stages in the treatment session to treat the mental health disorder ([0068] “The duration of a set of bilateral stimulations may be predefined according to the needs of the client during therapy, for example for long or short processing, etc.”; [0003]).
Regarding claim 26, Gazit teaches the method of claim 25, comprising a plurality of treatment stages to be performed in the treatment session, said treatment stages comprising a plurality of eye movements from left to right and from right to left as performed by the user, according to a plurality of movements of an eye stimulus from left to right and from right to left ([0061-0063] “The movement of the visual element may be set to scan, meaning containing the whole movement path from left to right and back (scanning)”); wherein an attentiveness of the user at each stage to said movements of said eye stimulus is determined, and wherein a subsequent stage is not started until sufficient attentiveness of the user to a current stage is shown ([0086] “The streaming of broadcast messages may continue as long as, at a step 216, a preset definition of the activity, or of a set of multiple activities, as defined by the therapist, has not been completed”; [0088] “Alternatively, the system could be configured to display any type of image or icon. At the bottom of the therapist screen there are dashboard controls 310, which, as described above, can be used to change aspects of the visual element, such as its color, shape, and speed, while also starting or stopping the bilateral stimulation and controlling use of the audio and/or tactile output. An additional indicator that may appear on the therapist screen is an indicator that the client platform has made a data connection (such as a WebSocket connection) to the EMDR application. A similar indicator may also be configured to appear on the client screen.”; [0095]).

Regarding claim 27, Gazit teaches the method of claim 26, wherein said treatment stages comprise at least Activation, wherein the user performs eye movements while considering a traumatic event ([0006] “During the assessment, a memory and its different components are identified, including the symptoms that have been caused by the memory and related aspects, including an associated image, negative thoughts associated with the memory, where it is located in the body, descriptions of related emotions, etc.”); Externalization, wherein the user performs eye movements while imagining themselves as a character outside of said traumatic event ([0008] “installing new adaptive information by having the client concentrate on a desired positive belief, with periodic sets of bilateral stimulation”); and Deactivation, wherein the user performs eye movements while imagining such an event as non-traumatic ([0005] “Preparation activities prepare the client to feel in control during memory processing. This phase involves “safe place” techniques, during which the client is asked to imagine a safe place, while he is engaged in a bilateral stimulation (BLS)”).

Regarding claim 28, Gazit teaches the method of claim 27, wherein said treatment stages further comprise Reorientation, wherein the user performs eye movements while re-imagining the event ([0010] “reevaluation phase, typically done at the next session, when the client is asked to bring back the memory and see how it feels. BLS may be performed during this phase, as well.”).

Regarding claim 29, Gazit teaches the system of claim 1, further comprising a cloud computing platform for storing a plurality of scripts ([0041]); and a computer network ([0046] “web browsers and mobile applications with real-time audio and video communication.”), wherein said user computational device communicates with said cloud computing platform through said computer network ([0047] “applications run on the therapist and client platforms as browser-based applications, i.e., “client-side code,” which is loaded to the browsers when the browsers are directed to a webpage of the EMDR web server.”); wherein upon initiation of the treatment session, a script is accessed from said cloud computing platform by said user computational device ([0051] “the EMDR application may be configured to simultaneously stream, to both the client and therapist applications”); wherein said script is parsed into a plurality of frames, wherein each frame represents a graphical user interface (GUI) display for said user interface and wherein each frame is displayed through said display of said user computational device ([0050-0053] “client and therapist applications present the visual, audio, and tactile elements at positions, and with features, that may be determined by EMDR messages delivered (i.e., “pushed”) synchronously to the client and therapist applications from the EMDR application over respective communications links 30 and 32.”).

Regarding claim 30, Gazit teaches the system of claim 29, wherein one or more user commands for adjusting said script are provided through said user interface, and wherein said script is adjusted according to said one or more user commands ([0053] “A “dashboard” may be implemented as part of the same therapist application that presents the stimulation elements and the video chat, enabling the therapist to control parameters of bilateral stimulations”).
Regarding claim 31, Gazit teaches the system of claim 1, further comprising a cloud computing platform ([0041] “the EMDR server may be a cloud-hosted server that operates, for example, on a cloud service such as Amazon Web Services Elastic Computing Cloud”), comprising a virtual machine ([0041] “any type of computing device”), comprising a processor and a memory for storing a plurality of instructions ([0038] “the EMDR server are computing platforms that typically includes one or more processors (i.e., CPUs), typically provided as computing chips or as any suitable computing device, and including memory and a storage system”); and a computer network ([0039]; [0046] “web browsers and mobile applications with real-time audio and video communication.”), wherein said user computational device communicates with said cloud computing platform through said computer network ([0047] “applications run on the therapist and client platforms as browser-based applications, i.e., “client-side code,” which is loaded to the browsers when the browsers are directed to a webpage of the EMDR web server.”); wherein said processor of said virtual machine executes said instructions on said memory for dynamic treatment generation configuration ([0082]), for dynamically adjusting the treatment session according to an analysis of user interactions during the treatment session ([0095] “The EMDR server may also be configured to automatically modify the bilateral stimulation according to the level of correspondence”; [0084]).

Regarding claim 32, Gazit teaches the system of claim 31, wherein said analysis of user interactions comprises receiving user feedback and adjusting the treatment session accordingly ([0053] “Parameters are typically set before a set of bilateral stimulations begins, but they can also be changed during a stimulation activity when appropriate. That is, sometimes there is a need to change an EMDR set, that is, to change the predefined set of bilateral stimulations, according to client needs while a BLS activity is being conducted.”).

Regarding claim 33, Gazit teaches the system of claim 31, wherein said cloud computing platform further comprises a therapy session engine for receiving real time session data and for adjusting the treatment session accordingly ([0041] “real-time audio and video communication.”; [0071-0072]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7, 10-12, 17-18, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200086077 A1 (Gazit et al.) in view of US 20190385711 A1 (Shriberg et al.).

Regarding claim 7, Gazit teaches the system of claim 6. Gazit does not teach wherein said biometric signal comprises heart rate, the system further comprising a heart rate monitor, wherein said heart rate monitor transmits heart rate information to said user computational device. However, Shriberg teaches wherein said biometric signal comprises heart rate, the system further comprising a heart rate monitor, wherein said heart rate monitor transmits heart rate information to said user computational device ([0161] “A client device may collect additional data, such as biometric data. For example, smart watches and fitness trackers already have the capability of measuring motion, heart rate and sometimes respiratory rate and blood oxygenation levels and other physiologic parameters.”; [0362]; [0546] “patient data for mental illness diagnostics may be extracted from one or more of the patient's biometrics including heart rate”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the system taught by Gazit to include measuring heart rate information. One would have been motivated to make this modification because changes in heart rate can aid in mental health diagnostic determinations, as suggested by Shriberg [0362, 0546].

Regarding claim 10, Gazit teaches the system of any of the above claims. Gazit does not teach wherein said instructions further comprise instructions for analyzing eye movements of the user according to one or more deep learning and/or machine learning algorithms. However, Shriberg teaches wherein said instructions further comprise instructions for analyzing eye movements of the user according to one or more deep learning and/or machine learning algorithms ([0154] “machine learning algorithms”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the system taught by Gazit to include analysis using a machine learning algorithm. One would have been motivated to make this modification because machine learning can be used to assess the mental state of a patient over multiple sessions and evaluate audiovisual signals of the patient, as suggested by Shriberg [0154, 0186, 0188].

Regarding claim 11, Gazit teaches the system of claim 10. Gazit does not teach wherein said instructions further comprise instructions for analyzing biometric information from the user according to one or more deep learning and/or machine learning algorithms.
However, Shriberg teaches wherein said instructions further comprise instructions for analyzing biometric information from the user according to one or more deep learning and/or machine learning algorithms ([0154] “machine learning algorithms”). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to have modified the system taught by Gazit to include analysis using a machine learning algorithm. One would have been motivated to make this modification because machine learning can be used to assess the mental state of a patient over multiple sessions and evaluate audiovisual signals of the patient, as suggested by Shriberg [0154, 0186, 0188]. Regarding claim 12, Gazit teaches the system of claim 11. Gazit does not teach wherein said one or more deep learning and/or machine learning algorithms comprise an algorithm selected from the group consisting of a DBN, a CNN and an RNN. However, Shriberg teaches wherein said one or more deep learning and/or machine learning algorithms comprise an algorithm selected from the group consisting of a DBN, a CNN and an RNN ([0290] “CNNs”; [0338]). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to have modified the system taught by Gazit to include analysis using a CNN, RNN, and/or DBN machine learning algorithm. One would have been motivated to make this modification because machine learning can be used to assess the mental state of a patient over multiple sessions and evaluate audiovisual signals of the patient and algorithms such as CNNs can be used for image processing in order to model gaze tracking, as suggested by Shriberg [0290]. Regarding claim 17, Gazit teaches the system of claim 16. 
Gazit does not teach wherein said biometric signal comprises heart rate, the system further comprising a heart rate monitor, wherein said cloud computing platform receives heart rate information directly or indirectly from said heart rate monitor. However, Shriberg teaches wherein said biometric signal comprises heart rate, the system further comprising a heart rate monitor, wherein said cloud computing platform receives heart rate information directly or indirectly from said heart rate monitor ([0161] “A client device may collect additional data, such as biometric data. For example, smart watches and fitness trackers already have the capability of measuring motion, heart rate and sometimes respiratory rate and blood oxygenation levels and other physiologic parameters.”; [0362]; [0546] “patient data for mental illness diagnostics may be extracted from one or more of the patient's biometrics including heart rate”). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to have modified the system taught by Gazit to include measuring heart rate information. One would have been motivated to make this modification because changes in heart rate can aid in mental health diagnostic determinations, as suggested by Shriberg [0362, 0546]. Regarding claim 18, Gazit teaches the system of claim 17, wherein said biometric signal comprises eye gaze, wherein said user computational device obtains eye gaze information from said camera, and wherein said cloud computing platform receives said eye gaze information from said user computational device ([0095] “the client or therapist application may include performing image processing of a real time video recording of the client to identify eye movements of the client. Typically, the real time video recording is the video recording performed for the video chat between the client and the therapist. 
The image processing tracks the eye movement and determines a level of correspondence between the eye movement and with the movement of the visual element on the client screen.”; [0039-0041]).

Regarding claim 23, Gazit teaches the system of claim 18, wherein said instructions of said virtual machine comprise instructions for determining a degree of attentiveness of the user according to said tracking of eye movements ([0095] “The image processing tracks the eye movement and determines a level of correspondence between the eye movement and with the movement of the visual element on the client screen. The level of correspondence can be indicated to the therapist, who can then determine if parameters of the bilateral stimulation should be changed accordingly, such as changing the speed, type, or range of movement”).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EVELYN GRACE PARK whose telephone number is (571)272-0651. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert (Tse) Chen, can be reached at (571)272-3672. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EVELYN GRACE PARK/
Examiner, Art Unit 3791

/TSE W CHEN/
Supervisory Patent Examiner, Art Unit 3791
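The "level of correspondence" computation cited against claims 18 and 23 (Gazit [0095]) can be made concrete with a minimal sketch: correlate the tracked eye position against the position of the moving visual element, then map the result to a coarse attentiveness label. The function names, the Pearson-correlation choice, and the 0.8 threshold are illustrative assumptions, not anything disclosed in the references.

```python
import statistics

def correspondence_score(gaze_x, stim_x):
    """Pearson correlation between sampled horizontal eye positions and the
    horizontal positions of the moving visual element -- a rough proxy for
    the 'level of correspondence' described in Gazit [0095]."""
    if len(gaze_x) != len(stim_x) or len(gaze_x) < 2:
        raise ValueError("need two equal-length series of at least 2 samples")
    mg, ms = statistics.fmean(gaze_x), statistics.fmean(stim_x)
    cov = sum((g - mg) * (s - ms) for g, s in zip(gaze_x, stim_x))
    var_g = sum((g - mg) ** 2 for g in gaze_x)
    var_s = sum((s - ms) ** 2 for s in stim_x)
    if var_g == 0 or var_s == 0:
        return 0.0  # no movement in one series: no meaningful correspondence
    return cov / (var_g * var_s) ** 0.5

def attentiveness(gaze_x, stim_x, threshold=0.8):
    """Map the correspondence score to a coarse label that a therapist UI
    might surface (hypothetical threshold)."""
    return "attentive" if correspondence_score(gaze_x, stim_x) >= threshold else "inattentive"
```

A score near 1.0 means the gaze closely follows the stimulus (e.g. gaze `[0.1, 0.9, 2.1, 2.9, 4.0]` against stimulus `[0, 1, 2, 3, 4]`), which is the signal Gazit uses to let the therapist adjust the speed, type, or range of the bilateral stimulation.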

Prosecution Timeline

Dec 12, 2022
Application Filed
Oct 18, 2025
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594006
SMARTPHONE APPLICATION WITH POP-OPEN SOUNDWAVE GUIDE FOR DIAGNOSING OTITIS MEDIA IN A TELEMEDICINE ENVIRONMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12588835
METHOD AND SYSTEM FOR TRACKING MOVEMENT OF A PERSON WITH WEARABLE SENSORS
2y 5m to grant Granted Mar 31, 2026
Patent 12569147
FLUID RESPONSIVENESS DETECTION DEVICE AND METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12564390
A BIOPSY ARRANGEMENT
2y 5m to grant Granted Mar 03, 2026
Patent 12557991
TEMPERATURE MEASUREMENT DEVICE AND SYSTEM FOR DETERMINING A DEEP INTERNAL TEMPERATURE OF A HUMAN BEING
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
56%
Grant Probability
99%
With Interview (+46.9%)
3y 11m
Median Time to Grant
Low
PTA Risk
Based on 80 resolved cases by this examiner. Grant probability derived from career allow rate.
