Prosecution Insights
Last updated: April 19, 2026
Application No. 18/201,062

Systems and Methods for Detecting Impairment Based Upon Movement Data

Final Rejection (§102, §103)

Filed: May 23, 2023
Examiner: PARK, EVELYN GRACE
Art Unit: 3791
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: BI Incorporated
OA Round: 2 (Final)

Grant Probability: 56% (Moderate)
OA Rounds: 3-4
To Grant: 3y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 56% (45 granted / 80 resolved; -13.7% vs TC avg)
Interview Lift: +46.9% (resolved cases with interview vs without)
Avg Prosecution: 3y 11m (typical timeline; 33 currently pending)
Total Applications: 113 across all art units

Statute-Specific Performance

§101: 13.1% (-26.9% vs TC avg)
§103: 34.1% (-5.9% vs TC avg)
§102: 31.7% (-8.3% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 80 resolved cases

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed November 21, 2025 has been entered. Claims 1-20 remain pending in the application. Applicant’s amendments to the claims have overcome each and every 112 rejection, 102 rejection, and 103 rejection previously set forth in the Non-Final Office Action mailed September 3, 2025. Applicant’s amendments to the claims necessitate new grounds of rejection, as described in the Response to Arguments and 102/103 Rejections below.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 4-12, and 15-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 20220218253 A1 (Seidenspinner, Don P.).
Regarding claim 1, Seidenspinner teaches a system for detecting impairment based upon movement, the system comprising: a movement sensor configured to receive movement information about a detached monitor device ([0002] “The device for this impairment evaluation may be a handheld unit, e.g., a smart phone, suitably programmed to provide the delivery of a stimulus, a record of the response to the stimulus, and perform at least a portion of the analysis of the response for determination of possible impairment of the measured individual.”; [0096] “movement detectors, e.g., accelerometers, may be positioned on the body region or incorporated into the stimulus device, e.g., a smartphone, and utilized for the purpose of detection of muscle movement. In still other instances, other methods, e.g., audio recordings or infrared movement detectors, may be employed to detect motion or other involuntary responses.”); a visual display ([0143] “display”); one or more processors ([0054]; [0056] “one or more devices or computational systems performing the analysis that determines various aspects of the response”); and a non-transient computer readable medium coupled to the one or more processors, and having stored therein instructions ([0056]; [0067]) which when executed by the one or more processors, causes the one or more processors to: cause a disorienting video stream to play on the visual display for an individual to watch ([0083] “When light stimulation is employed, one form of stimulus is that of a bright momentary flash of visible white light encompassing multiple frequencies of light. Alternate embodiments may employ a strobing light with a set number of flashes whose timing and numbers of flashes is preset or responsive to measured parameters. 
These light bursts are preferably emitted by device of the invention such as device 110 or by a component in direct communication with or affixed to device 110.”; [0143] “ability to record from either face of the phone, i.e., the front having a display and the back, as well as the ability to deliver a light stimulus, i.e., from the front display or back facing camera flash feature, a user may hold the phone in such a manner to both enter commands through the touch pad display, and in so-called “selfie” mode, hold the camera to enable the stimulus to be delivered and a video record obtained”); receive the movement information from the movement sensor ([0071] “[0071] 1. Stimulate a subject to invoke an involuntary muscle movement [0072] 2. Record the response to this stimulus both prior to, during, and following the stimulation [0073] 3. Analyze the data from this response, e.g., magnitude and timing of response, to arrive at one or metrics indicative of the response”); apply a movement impairment model to the movement information to yield a probability that the individual watching the disorienting video stream is impaired ([0120] “Determination of how the timing or magnitude of the response of the data values associated with the response compares to normative or impaired values is utilized in various embodiments of the invention to assess the likelihood and degree of possible impairment. One means of accomplishing this objective is the mathematical comparison between the observed response value(s) of one or more parameters or sections of a response versus those same response parameters in comparative data sets that preferably includes both normative and impaired values. 
In certain instances, mathematical transforms of the data sets, e.g., parameters, equations, or algorithms defining the data sets or values and the variance associated with these may be employed.”; [0124]); indicate a likelihood of impairment based at least in part on a determination that the probability exceeds a first threshold ([0133] “whether such impairment exceeds a pre-determined level or threshold. The scoring may be directly associated with a threshold value or indirectly as a probability.”); and indicate no impairment when the probability is less than a second threshold ([0131-0132] “parameter X's time or parameter score is more likely to be part of the normal population or of the impaired population. Using basic forms of statistical analyses, the mean and standard deviations of both comparative data sets may be readily computed and the probability that X is more likely associated with either of the two groups may be readily determined.”). Regarding claim 4, Seidenspinner teaches the system of claim 1, the system further comprising: a camera ([0143] “camera”); wherein the non-transient computer readable medium further having stored therein instructions which when executed by the one or more processors, causes the one or more processors to: receive a face image of the individual indicating the individual is watching the visual display ([0091] “By predefined criteria, the face construct can then be determined to be in the correct orientation and distance to enable measurements to be taken.”); wherein indicating no impairment is based at least in part on the face image of the individual indicating the individual is watching the visual display ([0091]; [0094] “Upon achieving appropriate orientation and distance, the system may then alert the user that the system is ready to deliver the stimulus and take measurements, e.g., with a visual alert or audible tone.”; [0100]). 
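The two-threshold decision logic the examiner maps to claim 1 above (indicate impairment when the probability exceeds a first threshold, indicate no impairment when it falls below a second threshold), together with the between-thresholds forwarding recited in claim 10, can be sketched as follows. The function name and the threshold values are illustrative assumptions, not taken from the claims or the cited reference:

```python
def classify_impairment(probability, first_threshold=0.8, second_threshold=0.2):
    """Two-threshold decision sketch (illustrative threshold values)."""
    if probability > first_threshold:
        return "impairment likely"   # probability exceeds the first threshold
    if probability < second_threshold:
        return "no impairment"       # probability is less than the second threshold
    # Between the two thresholds: forward the movement information
    # to a user for classification, as recited in claim 10.
    return "forward for review"

print(classify_impairment(0.9))   # impairment likely
print(classify_impairment(0.5))   # forward for review
```

The gap between the two thresholds is what distinguishes claim 10: probabilities that are neither clearly impaired nor clearly unimpaired are deferred to a human classifier rather than decided automatically.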
Regarding claim 5, Seidenspinner teaches the system of claim 1, wherein the non-transient computer readable medium further having stored therein instructions which when executed by the one or more processors, causes the one or more processors to: compare the movement information with a movement threshold ([0100] “the stimulus applied will be sufficient to result in a measurable change in at least one body muscle or activity associated with impairment and that the recording or measurement tool is sensitive to this change and has sufficient resolution to detect the change associated with impairment by either timescale of the monitoring or sensitivity of measurement”); and wherein indicating no impairment is based at least in part on the movement information being greater than the movement threshold ([0100] “In related instances, the stimulus and the lack of response by a body region may in itself represent a degree of impairment when an unimpaired individual would be anticipated to have a response.”). Regarding claim 6, Seidenspinner teaches the system of claim 1, wherein the movement impairment model is a machine learning model ([0050] “Assessment of impairment likelihood and degree of impairment may then be determined in a quantitative fashion. 
Quantitative results may then be obtained with use of one or more analysis techniques, such as machine learning algorithms”) trained using at least one hundred instances of movement information data ([0053] “Data from the recording is then analyzed for concordance with the timing and magnitude of the response with one or more training datasets or values representative of normative and impaired response”; [0104]; [0114]; [0124]; [0142] “As video frame rate on modern smart phones in slow motion configuration is on the order of 120 frames per second, and super slow motion has frame rates on the order of 400 frames per second, these devices have sufficient data capture speed to discriminate events with resolution of 10 msec or less.” – The frame rate indicates that there are at least 100 instances of movement data used to train the machine learning model with video frames from multiple individuals). Regarding claim 7, Seidenspinner teaches the system of claim 6, wherein the at least one hundred instances of movement information correspond to at least ten different individuals undergoing a movement based impairment test ([0124] “the modeling may be from datasets arising from a combination of individuals which may or may not include the subject themselves.”; Fig. 4: [0131] “FIG. 4 shows the discrimination of values grouped with a classifier algorithm. This hypothetical graph indicates a subject's data value (X) plotted against two comparative data sets, normal (.circle-solid.) and impaired (◯).” – Fig. 4 depicts more than ten different datasets, which can be from unique individuals). 
Regarding claim 8, Seidenspinner teaches the system of claim 1, wherein the non-transient computer readable medium further having stored therein instructions which when executed by the one or more processors, causes the one or more processors to: cause a request to be sent to the individual to perform an additional impairment test ([0103] “a more extended monitoring time may be employed, e.g., tens of seconds, or minutes, for a variety of purposes, e.g., to enable additional stimuli to be applied, stimulus response recovery times to be tracked, etc.”). Regarding claim 9, Seidenspinner teaches the system of claim 8, wherein the additional impairment test is selected from a group consisting of: a facial image based impairment test, and a voice based impairment test ([0077] “facial image data structure database 140 that, when employed with measured subject data, enable determination of possible impairment”; [0049]). Regarding claim 10, Seidenspinner teaches the system of claim 1, wherein the non-transient computer readable medium further having stored therein instructions which when executed by the one or more processors, causes the one or more processors to: forward the movement information to a user for classification when the probability is both less than the first threshold and greater than the second threshold ([0131] “FIG. 4 shows the discrimination of values grouped with a classifier algorithm. This hypothetical graph indicates a subject's data value (X) plotted against two comparative data sets, normal (.circle-solid.) and impaired (◯).”; [0132] “test if parameter X's time or parameter score is more likely to be part of the normal population or of the impaired population. Using basic forms of statistical analyses, the mean and standard deviations of both comparative data sets may be readily computed and the probability that X is more likely associated with either of the two groups may be readily determined. 
This is a common statistical exercise, e.g., done using classical statistical techniques, or by using a machine learning based classifier algorithm.”). Regarding claim 11, Seidenspinner teaches the system of claim 1, wherein the non-transient computer readable medium further having stored therein instructions which when executed by the one or more processors, causes the one or more processors to: report the likelihood of impairment to a recipient device apart from the one or more processors ([0150-0151] “Upon completion of the evaluation, the clinician (third party 580) may receive the findings and incorporate these into the subject's medical history or these may automatically populate into an electronic health record.”). Regarding claim 12, Seidenspinner teaches a method for detecting impairment based upon movement information, the method comprising: displaying, by a visual display, a disorienting video stream for an individual to watch ([0083] “When light stimulation is employed, one form of stimulus is that of a bright momentary flash of visible white light encompassing multiple frequencies of light. Alternate embodiments may employ a strobing light with a set number of flashes whose timing and numbers of flashes is preset or responsive to measured parameters. 
These light bursts are preferably emitted by device of the invention such as device 110 or by a component in direct communication with or affixed to device 110.”; [0143] “ability to record from either face of the phone, i.e., the front having a display and the back, as well as the ability to deliver a light stimulus, i.e., from the front display or back facing camera flash feature, a user may hold the phone in such a manner to both enter commands through the touch pad display, and in so-called “selfie” mode, hold the camera to enable the stimulus to be delivered and a video record obtained”); receiving, by a processor ([0054-0056]), movement information from a movement sensor included in a detached monitor device ([0002] “The device for this impairment evaluation may be a handheld unit, e.g., a smart phone, suitably programmed to provide the delivery of a stimulus, a record of the response to the stimulus, and perform at least a portion of the analysis of the response for determination of possible impairment of the measured individual.”; [0096] “movement detectors, e.g., accelerometers, may be positioned on the body region or incorporated into the stimulus device, e.g., a smartphone, and utilized for the purpose of detection of muscle movement. In still other instances, other methods, e.g., audio recordings or infrared movement detectors, may be employed to detect motion or other involuntary responses.”); applying, by the processor, a movement impairment model to the movement information to yield a probability that the individual watching the disorienting video stream is impaired ([0120] “Determination of how the timing or magnitude of the response of the data values associated with the response compares to normative or impaired values is utilized in various embodiments of the invention to assess the likelihood and degree of possible impairment. 
One means of accomplishing this objective is the mathematical comparison between the observed response value(s) of one or more parameters or sections of a response versus those same response parameters in comparative data sets that preferably includes both normative and impaired values. In certain instances, mathematical transforms of the data sets, e.g., parameters, equations, or algorithms defining the data sets or values and the variance associated with these may be employed.”; [0124]); indicating, by the processor, a likelihood of impairment based at least in part on a determination that the probability exceeds a first threshold ([0133] “whether such impairment exceeds a pre-determined level or threshold. The scoring may be directly associated with a threshold value or indirectly as a probability.”); and indicating, by the processor, no impairment when the probability is less than a second threshold ([0131-0132] “parameter X's time or parameter score is more likely to be part of the normal population or of the impaired population. Using basic forms of statistical analyses, the mean and standard deviations of both comparative data sets may be readily computed and the probability that X is more likely associated with either of the two groups may be readily determined.”). 
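The statistical comparison Seidenspinner describes at [0131-0132] (computing the mean and standard deviation of the normative and impaired data sets, then asking which population a subject value X is more likely associated with) amounts to a simple Gaussian likelihood comparison. A minimal sketch, using hypothetical response-time values rather than any data from the reference:

```python
import statistics
from math import exp, pi, sqrt

def gaussian_pdf(x, mean, stdev):
    """Probability density of x under a normal distribution."""
    return exp(-((x - mean) ** 2) / (2 * stdev ** 2)) / (stdev * sqrt(2 * pi))

def more_likely_group(x, normal_values, impaired_values):
    """Decide whether subject value x is more likely from the normative or
    the impaired comparative data set, per Seidenspinner [0131-0132]."""
    n_mu, n_sd = statistics.mean(normal_values), statistics.stdev(normal_values)
    i_mu, i_sd = statistics.mean(impaired_values), statistics.stdev(impaired_values)
    if gaussian_pdf(x, n_mu, n_sd) >= gaussian_pdf(x, i_mu, i_sd):
        return "normal"
    return "impaired"

normal = [210, 220, 215, 225, 218]      # hypothetical response times (ms)
impaired = [320, 340, 330, 350, 335]
print(more_likely_group(230, normal, impaired))  # normal
```

As the reference notes, the same classification can equally be done with a machine-learning classifier; the closed-form version above is only the "basic forms of statistical analyses" case.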
Regarding claim 15, Seidenspinner teaches the method of claim 12, the method further comprising: receiving, by the processor, a face image of the individual indicating the individual is watching the visual display ([0091] “By predefined criteria, the face construct can then be determined to be in the correct orientation and distance to enable measurements to be taken.”); and wherein indicating no impairment is based at least in part on the face image of the individual indicating the individual is watching the visual display ([0091]; [0094] “Upon achieving appropriate orientation and distance, the system may then alert the user that the system is ready to deliver the stimulus and take measurements, e.g., with a visual alert or audible tone.”; [0100]). Regarding claim 16, Seidenspinner teaches the method of claim 12, the method further comprising: comparing, by the processor, the movement information with a movement threshold ([0100] “the stimulus applied will be sufficient to result in a measurable change in at least one body muscle or activity associated with impairment and that the recording or measurement tool is sensitive to this change and has sufficient resolution to detect the change associated with impairment by either timescale of the monitoring or sensitivity of measurement”); and wherein indicating no impairment is based at least in part on the movement information being greater than the movement threshold ([0100] “In related instances, the stimulus and the lack of response by a body region may in itself represent a degree of impairment when an unimpaired individual would be anticipated to have a response.”). Regarding claim 17, Seidenspinner teaches the method of claim 12, wherein the movement impairment model is a machine learning model ([0050] “Assessment of impairment likelihood and degree of impairment may then be determined in a quantitative fashion. 
Quantitative results may then be obtained with use of one or more analysis techniques, such as machine learning algorithms”) trained using at least one hundred instances of movement information data ([0053] “Data from the recording is then analyzed for concordance with the timing and magnitude of the response with one or more training datasets or values representative of normative and impaired response”; [0104]; [0114]; [0124]; [0142] “As video frame rate on modern smart phones in slow motion configuration is on the order of 120 frames per second, and super slow motion has frame rates on the order of 400 frames per second, these devices have sufficient data capture speed to discriminate events with resolution of 10 msec or less.” – The frame rate indicates that there are at least 100 instances of movement data used to train the machine learning model with video frames from multiple individuals). Regarding claim 18, Seidenspinner teaches the method of claim 17, wherein the at least one hundred instances of movement information correspond to at least ten different individuals undergoing a movement based impairment test ([0124] “the modeling may be from datasets arising from a combination of individuals which may or may not include the subject themselves.”; Fig. 4: [0131] “FIG. 4 shows the discrimination of values grouped with a classifier algorithm. This hypothetical graph indicates a subject's data value (X) plotted against two comparative data sets, normal (.circle-solid.) and impaired (◯).” – Fig. 4 depicts more than ten different datasets, which can be from unique individuals). 
Regarding claim 19, Seidenspinner teaches the method of claim 12, the method further comprising: causing a request to be sent to the individual to perform an additional impairment test ([0103] “a more extended monitoring time may be employed, e.g., tens of seconds, or minutes, for a variety of purposes, e.g., to enable additional stimuli to be applied, stimulus response recovery times to be tracked, etc.”), wherein the additional impairment test is selected from a group consisting of: a facial image based impairment test, and a voice based impairment test ([0077] “facial image data structure database 140 that, when employed with measured subject data, enable determination of possible impairment”; [0049]). Regarding claim 20, Seidenspinner teaches a non-transient computer readable medium having stored therein instructions, which when executed by a hardware processing system cause the hardware processing system to: cause a disorienting video stream to play on a visual display for an individual to watch ([0083] “When light stimulation is employed, one form of stimulus is that of a bright momentary flash of visible white light encompassing multiple frequencies of light. Alternate embodiments may employ a strobing light with a set number of flashes whose timing and numbers of flashes is preset or responsive to measured parameters. 
These light bursts are preferably emitted by device of the invention such as device 110 or by a component in direct communication with or affixed to device 110.”; [0143] “ability to record from either face of the phone, i.e., the front having a display and the back, as well as the ability to deliver a light stimulus, i.e., from the front display or back facing camera flash feature, a user may hold the phone in such a manner to both enter commands through the touch pad display, and in so-called “selfie” mode, hold the camera to enable the stimulus to be delivered and a video record obtained”); receive movement information from a movement sensor ([0002] “The device for this impairment evaluation may be a handheld unit, e.g., a smart phone, suitably programmed to provide the delivery of a stimulus, a record of the response to the stimulus, and perform at least a portion of the analysis of the response for determination of possible impairment of the measured individual.”; [0096] “movement detectors, e.g., accelerometers, may be positioned on the body region or incorporated into the stimulus device, e.g., a smartphone, and utilized for the purpose of detection of muscle movement. In still other instances, other methods, e.g., audio recordings or infrared movement detectors, may be employed to detect motion or other involuntary responses.”; [0071-0073]); apply a movement impairment model to the movement information to yield a probability that the individual watching the disorienting video stream is impaired ([0120] “Determination of how the timing or magnitude of the response of the data values associated with the response compares to normative or impaired values is utilized in various embodiments of the invention to assess the likelihood and degree of possible impairment. 
One means of accomplishing this objective is the mathematical comparison between the observed response value(s) of one or more parameters or sections of a response versus those same response parameters in comparative data sets that preferably includes both normative and impaired values. In certain instances, mathematical transforms of the data sets, e.g., parameters, equations, or algorithms defining the data sets or values and the variance associated with these may be employed.”; [0124]), wherein the movement impairment model is a machine learning model ([0050] “Assessment of impairment likelihood and degree of impairment may then be determined in a quantitative fashion. Quantitative results may then be obtained with use of one or more analysis techniques, such as machine learning algorithms”) trained using at least one hundred instances of movement information data ([0053] “Data from the recording is then analyzed for concordance with the timing and magnitude of the response with one or more training datasets or values representative of normative and impaired response”; [0104]; [0114]; [0124]; [0142] “As video frame rate on modern smart phones in slow motion configuration is on the order of 120 frames per second, and super slow motion has frame rates on the order of 400 frames per second, these devices have sufficient data capture speed to discriminate events with resolution of 10 msec or less.” – The frame rate indicates that there are at least 100 instances of movement data used to train the machine learning model with video frames from multiple individuals), and wherein the at least one hundred instances of movement information correspond to at least ten different individuals undergoing a movement based impairment test ([0124] “the modeling may be from datasets arising from a combination of individuals which may or may not include the subject themselves.”; Fig. 4: [0131] “FIG. 4 shows the discrimination of values grouped with a classifier algorithm. 
This hypothetical graph indicates a subject's data value (X) plotted against two comparative data sets, normal (.circle-solid.) and impaired (◯).” – Fig. 4 depicts more than ten different datasets, which can be from unique individuals); indicate a likelihood of impairment based at least in part on a determination that the probability exceeds a first threshold ([0133] “whether such impairment exceeds a pre-determined level or threshold. The scoring may be directly associated with a threshold value or indirectly as a probability.”); and indicate no impairment when the probability is less than a second threshold ([0131-0132] “parameter X's time or parameter score is more likely to be part of the normal population or of the impaired population. Using basic forms of statistical analyses, the mean and standard deviations of both comparative data sets may be readily computed and the probability that X is more likely associated with either of the two groups may be readily determined.”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over US 20220218253 A1 (Seidenspinner, Don P.) in view of US 20150212063 A1 (Wojcik et al.), further in view of US 20080281550 A1 (Hogle et al.). 
Regarding claim 2, Seidenspinner teaches the system of claim 1, the system further comprising: a camera ([0143] “camera”). Seidenspinner does not explicitly teach wherein the non-transient computer readable medium further having stored therein instructions which when executed by the one or more processors, causes the one or more processors to: receive an image of surroundings of the individual; and based upon the image showing one or more physical supports around the individual, cause a request for the individual to move to another location beyond reach of physical supports. However, Wojcik teaches wherein the non-transient computer readable medium further having stored therein instructions which when executed by the one or more processors, causes the one or more processors to: receive an image of surroundings of the individual ([0095] “Camera 18 takes a digital image 1000 of the user, which shows at least a portion of the user's face 1002, a portion of the breath tube 14, and a portion of the housing front panel 1 that are within the field of view of camera”); and based upon the image showing one or more physical supports around the individual, cause a request for the individual to move to another location ([0111-0113] “a series of messages 602 are displayed to Offender 202 on OLED Display 11 (like that shown in FIG. 6) such as: "AVOID DIRECT SUNLIGHT," "CLEAR FACE OF OBSTRUCTIONS," "STAND OR SIT UP STRAIGHT," and "BREATH TUBE MUST BE LEVEL,”). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to have modified the system taught by Seidenspinner to include an indication of the user relying on a physical support and requesting them to move. One would have been motivated to make this modification because the facial image analysis is needed to determine movement of the eyes, eyebrows, nose, mouth, etc. 
and if the face is obstructed or the person is leaning on or lying on a support rather than sitting or standing up, the image will be of poor quality for analysis, so it is requested that they clear obstructions, stand up, and/or sit down, as suggested by Wojcik [0110-0114]. Seidenspinner in view of Wojcik does not explicitly teach cause a request for the individual to move to another location beyond reach of physical supports. However, Hogle teaches cause a request for the individual to move to another location beyond reach of physical supports ([0100] “Mild Impairment: Performs head turns smoothly with slight change in gait velocity, i.e., minor disruption to smooth gait path or uses walking aid. [0101] (1) Moderate Impairment: Performs head turns with moderate change in gait velocity, slows down, staggers but recovers, can continue to walk. [0102] (0) Severe Impairment: Performs task with severe disruption of gait, i.e., staggers outside 15'' path, loses balance, stops, reaches for wall.”; [0120-0126]). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to have modified the system taught by Seidenspinner in view of Wojcik to include requesting the individual to move to another location beyond reach of physical supports. One would have been motivated to make this modification because an assessment of non-impairment requires the individual not to rely upon physical supports; an individual leaning on a physical support such as a walking aid or a wall indicates impairment, as suggested by Hogle [0100-0126]. Regarding claim 13, Seidenspinner teaches the method of claim 12. Seidenspinner does not explicitly teach receiving, by the processor, an image of surroundings of the individual from a camera; and based upon the image showing one or more physical supports around the individual, causing, by the processor, a request for the individual to move to another location beyond reach of physical supports. 
However, Wojcik teaches receiving, by the processor, an image of surroundings of the individual from a camera ([0095] “Camera 18 takes a digital image 1000 of the user, which shows at least a portion of the user's face 1002, a portion of the breath tube 14, and a portion of the housing front panel 1 that are within the field of view of camera”); and based upon the image showing one or more physical supports around the individual, causing, by the processor, a request for the individual to move to another location ([0111-0113] “a series of messages 602 are displayed to Offender 202 on OLED Display 11 (like that shown in FIG. 6) such as: "AVOID DIRECT SUNLIGHT," "CLEAR FACE OF OBSTRUCTIONS," "STAND OR SIT UP STRAIGHT," and "BREATH TUBE MUST BE LEVEL,”). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to have modified the method taught by Seidenspinner to include an indication of the user relying on a physical support and requesting them to move. One would have been motivated to make this modification because the facial image analysis is needed to determine movement of the eyes, eyebrows, nose, mouth, etc. and if the face is obstructed or the person is leaning on or lying on a support rather than sitting or standing up, the image will be of poor quality for analysis, so it is requested that they clear obstructions, stand up, and/or sit down, as suggested by Wojcik [0110-0114]. Seidenspinner in view of Wojcik does not explicitly teach causing, by the processor, a request for the individual to move to another location beyond reach of physical supports. However, Hogle teaches causing, by the processor, a request for the individual to move to another location beyond reach of physical supports ([0100] “Mild Impairment: Performs head turns smoothly with slight change in gait velocity, i.e., minor disruption to smooth gait path or uses walking aid. 
[0101] (1) Moderate Impairment: Performs head turns with moderate change in gait velocity, slows down, staggers but recovers, can continue to walk. [0102] (0) Severe Impairment: Performs task with severe disruption of gait, i.e., staggers outside 15'' path, loses balance, stops, reaches for wall.”; [0120-0126]). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to have modified the method taught by Seidenspinner in view of Wojcik to include requesting the individual to move to another location beyond reach of physical supports. One would have been motivated to make this modification because an assessment of non-impairment requires the individual to not rely upon physical supports, since an individual leaning on a physical support such as a walking aid or a wall indicates impairment, as suggested by Hogle [0100-0126].

Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over US 20220218253 A1 (Seidenspinner, Don P.) in view of US 20150212063 A1 (Wojcik et al.).

Regarding claim 14, Seidenspinner teaches the method of claim 12. Seidenspinner does not explicitly teach receiving, by the processor, an image of surroundings of the individual; and wherein indicating no impairment is based at least in part on the image showing the individual located away from a physical support. However, Wojcik teaches receiving, by the processor, an image of surroundings of the individual ([0095] “Camera 18 takes a digital image 1000 of the user, which shows at least a portion of the user's face 1002, a portion of the breath tube 14, and a portion of the housing front panel 1 that are within the field of view of camera”); and wherein indicating no impairment is based at least in part on the image showing the individual located away from a physical support ([0111-0113] “a series of messages 602 are displayed to Offender 202 on OLED Display 11 (like that shown in FIG.
6) such as: "AVOID DIRECT SUNLIGHT," "CLEAR FACE OF OBSTRUCTIONS," "STAND OR SIT UP STRAIGHT," and "BREATH TUBE MUST BE LEVEL,”). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to have modified the method taught by Seidenspinner to include an indication of the user relying on a physical support and, based on this, outputting a non-impaired result. One would have been motivated to make this modification because the facial image analysis is needed to determine movement of the eyes, eyebrows, nose, mouth, etc., and if the face is obstructed or the person is leaning on or lying on a support rather than sitting or standing up, the image will be of poor quality for analysis, as suggested by Wojcik [0110-0114].

Regarding claim 3, Seidenspinner teaches the system of claim 1, the system further comprising: a camera ([0143] “camera”). Seidenspinner does not explicitly teach wherein the non-transient computer readable medium further having stored therein instructions which when executed by the one or more processors, causes the one or more processors to: receive an image of surroundings of the individual; and wherein indicating no impairment is based at least in part on the image showing the individual located away from a physical support.
However, Wojcik teaches wherein the non-transient computer readable medium further having stored therein instructions which when executed by the one or more processors, causes the one or more processors to: receive an image of surroundings of the individual ([0095] “Camera 18 takes a digital image 1000 of the user, which shows at least a portion of the user's face 1002, a portion of the breath tube 14, and a portion of the housing front panel 1 that are within the field of view of camera”); and wherein indicating no impairment is based at least in part on the image showing the individual located away from a physical support ([0111-0113] “a series of messages 602 are displayed to Offender 202 on OLED Display 11 (like that shown in FIG. 6) such as: "AVOID DIRECT SUNLIGHT," "CLEAR FACE OF OBSTRUCTIONS," "STAND OR SIT UP STRAIGHT," and "BREATH TUBE MUST BE LEVEL,”). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to have modified the system taught by Seidenspinner to include an indication of the user relying on a physical support and, based on this, outputting a non-impaired result. One would have been motivated to make this modification because the facial image analysis is needed to determine movement of the eyes, eyebrows, nose, mouth, etc., and if the face is obstructed or the person is leaning on or lying on a support rather than sitting or standing up, the image will be of poor quality for analysis, as suggested by Wojcik [0110-0114].

Response to Arguments

Applicant's arguments filed November 21, 2025 have been fully considered but they are not persuasive. With respect to the 102 Rejections in the Non-Final Office Action (See Pages 10-11 of Applicant’s Response “Claim Rejections under 35 U.S.C.
§102”), Applicant argues that Seidenspinner does not teach a disorienting video stream to play for an individual or applying a movement impairment model to the movement information to yield a probability that the individual watching the disorienting video is impaired, and therefore cannot disclose the limitations of the independent claims. With respect to the 103 rejections, Applicant states on pages 11-13 of Applicant’s Response that Wojcik fails to remedy the defects of Seidenspinner above, and that neither Wojcik nor Seidenspinner suggests having the individual move to another location beyond reach of physical supports during impairment detection as recited in the amended claims.

MPEP § 2111 discusses proper claim interpretation, including giving claims their broadest reasonable interpretation in light of the specification during examination. Under the broadest reasonable interpretation (BRI), the words of a claim must be given their plain meaning unless such meaning is inconsistent with the specification, and it is improper to import claim limitations from the specification into the claim. The requirements for anticipation are discussed in MPEP § 2131. MPEP § 2131 notes that “To reject a claim as anticipated by a reference, the disclosure must teach every element required by the claim under its broadest reasonable interpretation.”

Under BRI, the stimulus described by Seidenspinner reads on the “disorienting video stream” claim limitation of claims 1, 12, and 20. Seidenspinner teaches displaying a stimulus such as a number of flashes of light on a phone display [0083, 0143]. The “disorienting video stream” may be interpreted to be any visual stimulus that is capable of disorienting the individual, such as a number of timed flashing lights on the phone display, which is used to record the user’s reaction to determine impairment [0142-0143]. Therefore, Seidenspinner reads on the amended claim language under BRI.
New grounds of rejection were necessitated by the claim amendments. The amended limitations reciting having the individual move to another location beyond reach of physical supports during impairment detection of claim 2 and claim 13 are taught by Seidenspinner in view of Wojcik, further in view of Hogle, as described in the 103 rejections above. Claims 2-11 and 13-19 are rejected because the rejections of independent claims 1, 12, and 20 are proper and the prior art teaches or suggests all the features of these claims for the reasons described in the 102 and 103 Rejections.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EVELYN GRACE PARK, whose telephone number is (571) 272-0651. The examiner can normally be reached Monday through Friday, 9:00 AM to 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert (Tse) Chen, can be reached at (571) 272-3672. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EVELYN GRACE PARK/
Examiner, Art Unit 3791

/TSE W CHEN/
Supervisory Patent Examiner, Art Unit 3791

Prosecution Timeline

May 23, 2023
Application Filed
Aug 22, 2025
Non-Final Rejection — §102, §103
Nov 21, 2025
Response Filed
Feb 11, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594006
SMARTPHONE APPLICATION WITH POP-OPEN SOUNDWAVE GUIDE FOR DIAGNOSING OTITIS MEDIA IN A TELEMEDICINE ENVIRONMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12588835
METHOD AND SYSTEM FOR TRACKING MOVEMENT OF A PERSON WITH WEARABLE SENSORS
2y 5m to grant Granted Mar 31, 2026
Patent 12569147
FLUID RESPONSIVENESS DETECTION DEVICE AND METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12564390
A BIOPSY ARRANGEMENT
2y 5m to grant Granted Mar 03, 2026
Patent 12557991
TEMPERATURE MEASUREMENT DEVICE AND SYSTEM FOR DETERMINING A DEEP INTERNAL TEMPERATURE OF A HUMAN BEING
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
56%
Grant Probability
99%
With Interview (+46.9%)
3y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 80 resolved cases by this examiner. Grant probability derived from career allow rate.
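The 56% grant probability above matches the examiner's career allow rate (45 granted of 80 resolved cases). A minimal sketch of that arithmetic, assuming the dashboard simply rounds the ratio to a whole-number percentage:

```python
def allow_rate(granted: int, resolved: int) -> int:
    """Career allow rate as a rounded whole-number percentage."""
    return round(100 * granted / resolved)

# 45 granted of 80 resolved cases, per the examiner's career data
print(allow_rate(45, 80))  # 56
```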
