Prosecution Insights
Last updated: April 19, 2026
Application No. 19/237,779

SYSTEM AND METHOD FOR ASSESSING RESPIRATION

Final Rejection: §101, §102, §103, §112
Filed
Jun 13, 2025
Examiner
COOPER, JONATHAN EPHRAIM
Art Unit
3791
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Reflexion Interactive Technologies Inc.
OA Round
2 (Final)
Grant Probability
46% (Moderate)
OA Rounds
3-4
To Grant
3y 5m
With Interview
79%

Examiner Intelligence

Grants 46% of resolved cases
Career Allow Rate
46% (62 granted / 134 resolved; -23.7% vs TC avg)
Interview Lift
+32.5% (strong lift in allowance for resolved cases with an interview vs without)
Avg Prosecution
3y 5m (typical timeline); 50 currently pending
Total Applications
184 (career history, across all art units)
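The headline figures above can be reproduced from the raw counts. The sketch below assumes, as the panel labels suggest, that the allow rate is simply granted over resolved cases and that the interview lift is the percentage-point gap in allowance between resolved cases with and without an interview; those formulas are a reading of the dashboard, not documented definitions.

```python
# Hypothetical reconstruction of the dashboard's headline metrics from the
# raw counts shown above. The formulas (plain ratios and a percentage-point
# difference) are assumptions about how the panel is computed.

granted = 62       # examiner's career grants
resolved = 134     # career resolved cases

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")   # prints "Career allow rate: 46%"

# Interview lift, read as allowance-with-interview minus allowance-without,
# in percentage points (79% with an interview vs an assumed 46.5% without).
rate_with = 0.79
rate_without = 0.465
lift = (rate_with - rate_without) * 100
print(f"Interview lift: +{lift:.1f} points")    # prints "Interview lift: +32.5 points"
```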

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§102: 14.2% (-25.8% vs TC avg)
§103: 41.6% (+1.6% vs TC avg)
§112: 23.9% (-16.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 134 resolved cases

Office Action

Rejections under §101, §102, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see page 6, filed 01/23/2026, with respect to the objection to Claim 21 have been fully considered and are persuasive. The objection to Claim 21 has been withdrawn. However, new objections to Claims 26, 29 and 31 have been made.

Applicant’s arguments, see page 6, filed 01/23/2026, with respect to the interpretation of the claims under 35 U.S.C. § 112(f) are acknowledged. However, at least the § 112(f) interpretation of "pose tracking system" remains in the current rejection.

Applicant's arguments filed 01/23/2026 have been fully considered but they are not persuasive.

35 U.S.C. § 112(b)

Regarding the rejection of Claim 27 under 35 U.S.C. § 112(b), the applicant has argued “Applicant has amended claim 27 solely to advance prosecution”. However, the indefiniteness issue identified by the Examiner in the previous Office Action remains in the claim language. Claim 27 now recites “at least one baseline of the one or more baselines”, and there are no baselines recited in parent Claim 17. In addition, a new rejection of Claim 17 under 35 U.S.C. § 112(b) has been made.

35 U.S.C. § 101—Step 2A, Prong One

The applicant has argued the claims are not directed to a mental process. Specifically, regarding the claim limitation “wherein the one or more pose tracking systems are configured to detect the position and orientation of the user's head, the position and orientation defining pose data; ... 
isolate spatial components of the pose data in a fixed coordinate system and determine the displacement of the user's head during the breathing cycle from the pose data, [and] produce a signal representative of the user's breathing” in Claim 17, the applicant has argued “Such micro-movements, including instances in which they are confounded by non-breathing movements, and corresponding determination of positions within a 5 mm window are not reasonably perceptible nor accomplishable by the human mind”. These arguments are not commensurate with the scope of the claims. The broadest reasonable interpretation of the claim language is not limited to micro-movements or determinations of positions within a 5 mm window, but rather includes situations that are practically performed in the human mind, as explained in the updated 35 U.S.C. § 101 rejection below.

35 U.S.C. § 101—Step 2A, Prong Two

The applicant has argued “The claimed invention provides improved breath detection and truly individualized, low-latency feedback. As summarized in Applicant's Specification, the claimed invention addresses the variability in the direction and extent of individual head motion during breathing by isolating elements of head pose and determining the displacement associated with breathing. The claimed invention uses true pose tracking to (a) obtain accurate, drift-controlled head pose, including small longitudinal and vertical displacements, and (b) to use that mapped pose data to assess breathing while handling cross-subject variability in head-movement patterns.” However, isolating elements of head pose and determining the displacement associated with breathing are clearly mathematical concepts. Even though the Applicant alleges these steps enhance accuracy of breathing assessment, lead to fewer resource demands, and are particularly covered in the claims, it is important to keep in mind that an improvement in the abstract idea itself (e.g. 
a recited fundamental economic concept) is not an improvement in technology. In other words, the improvement in technology cannot come from improvement in the abstract idea. See MPEP 2106.05(a).

35 U.S.C. § 101—Step 2B

The applicant has asserted that claim 17 adds limitations other than what is well-understood, routine, conventional activity in the field of breath detection, specifically. The applicant has asserted “[The amended claim limitations] are not merely invoked as tools to perform an existing mental process, but representations of a new workflow designed to address specific problems with respiration detection encountered when merely deriving motion data without a fixed, anatomical frame”. However, even if a new workflow has been designed, this alleged new workflow (i.e. isolating spatial components of the pose data in a fixed coordinate system and determining the displacement of the user's head during the breathing cycle from the pose data) still clearly amounts to mathematical concepts. As stated above, the improvement in technology cannot come from improvement in the abstract idea. See MPEP 2106.05(a).

Specification

The disclosure is objected to because of the following informalities: In [0107], “Figure 4 illustrates a sinusoidal movement with noise and the same data after exponential smoothing. Specifically, the top of Figure 4 illustrates the sinusoidal movement as detected and the bottom of Figure 4 illustrates the sinusoidal movement detected after exponential smoothing (α=0.01)” should read “Figure [[4]] 6 illustrates a sinusoidal movement with noise and the same data after exponential smoothing. Specifically, the top of Figure 4 illustrates the sinusoidal movement as detected and the bottom of Figure [[4]] 6 illustrates the sinusoidal movement detected after exponential smoothing (α=0.01)”. Appropriate correction is required. 
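The smoothing operation quoted in the specification objection above, exponential smoothing with α=0.01 applied to a sinusoidal movement with noise, is a standard single-pole filter. The sketch below is an illustrative reconstruction, not the applicant's code; the sample rate, sine frequency, and noise level are invented for the demonstration.

```python
import math
import random

def exponential_smoothing(samples, alpha=0.01):
    """Single-pole IIR filter: s[n] = alpha * x[n] + (1 - alpha) * s[n-1]."""
    smoothed = []
    s = samples[0]  # seed the filter with the first sample
    for x in samples:
        s = alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed

def mean_step(xs):
    """Average absolute sample-to-sample change (a rough jitter measure)."""
    return sum(abs(b - a) for a, b in zip(xs, xs[1:])) / (len(xs) - 1)

random.seed(0)
# Assumed demo signal: 0.25 Hz sinusoidal movement sampled at 50 Hz,
# with additive Gaussian measurement noise.
raw = [math.sin(2 * math.pi * 0.25 * (n / 50)) + random.gauss(0, 0.3)
       for n in range(1000)]
smooth = exponential_smoothing(raw, alpha=0.01)

# The smoothed trace shows far less sample-to-sample jitter than the raw one,
# which is the contrast the specification's Figure (top vs bottom) depicts.
print(mean_step(raw), mean_step(smooth))
```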
Claim Objections

Claims 26, 29 and 31 are objected to because of the following informalities:

In Claim 26, “The system of claim 17, and wherein the at least one processor is further configured to establish one or more baselines for elements of the user's pose” should read “The system of claim 17, [[and]] wherein the at least one processor is further configured to establish one or more baselines for elements of the user's pose”.

In Claim 29, “The system of claim 28, the at least one processing device is configured to determine the longitudinal displacement of the user's head during the breathing cycle from the pose data” should read “The system of claim 28, wherein the at least one processing device is configured to determine the longitudinal displacement of the user's head during the breathing cycle from the pose data”.

In Claim 31, “magnometer” should be spelled “magnetometer”.

Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. 
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “one or more pose tracking systems” in Claim 17.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 
112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim 17 is being interpreted under 35 U.S.C. § 112(f) as it:

Uses the nonce term “pose tracking systems” for the apparatus performing the specified function;

“pose tracking systems” is linked with the transitional phrase “configured to” and modified by the functional language “detect the position and orientation of the user's head, the position and orientation defining pose data”; and

“pose tracking systems” is not modified by sufficient structure, material, or acts for performing the claimed function.

This claim will be interpreted in accordance with the disclosure of the applicant on [0037] and [0041] as an inertial measurement unit (IMU) sensor and equivalents thereof.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 20 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. 
The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 20 recites “wherein assessment of the user's breathing includes compiling the audio with the pose data to determine a breath state”. The applicant’s remarks filed 01/23/2026 assert “Support for the amendment to claim 20 may be found in, for example, Paragraphs 0098, 0107, and 0113” (page 6). However, none of these paragraphs mention compiling audio data with pose data, and the word “compile” does not appear in the written description.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 17-35 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 17 recites the limitation "determine the displacement of the user's head during the breathing cycle from the pose data" and “assess the user's breathing based on the first signal and at least one of the determined displacement”. 
It is unclear what the applicant means by “at least one of the determined displacement” as there is only one recited displacement previously. For the purposes of substantive examination, the examiner is construing this claim limitation as “assess the user's breathing based on the first signal and ”. Claims 18-35 are rejected by virtue of dependence on Claim 17.

Claim 27 recites the limitation "The system of claim 17, wherein at least one baseline of the one or more baselines is iteratively adjusted during the user's breathing". There is insufficient antecedent basis for this limitation in the claim. There is no previous mention in Claim 17 of one or more baselines. For the purposes of substantive examination, the examiner is construing this claim limitation as “The system of claim [[17]] 26, wherein at least one baseline of the one or more baselines is iteratively adjusted during the user's breathing”.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 17-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than an abstract idea. A streamlined analysis of claim 17 follows.

Regarding Claim 17, the claim recites a system for assessing a user's breathing in one or more breathing cycles. Thus, the claim is directed to an apparatus, which is one of the statutory categories of invention (Step 1). The claim is then analyzed to determine whether it is directed to any judicial exception (Step 2A, Prong One). 
The following limitations set forth a judicial exception:

collect pose data about the user's breathing from the one or more pose tracking systems

isolate spatial components of the pose data in a fixed coordinate system, and determine the displacement of the user's head during the breathing cycle from the pose data

produce a first signal representative of the user's breathing

assess the user's breathing based on the first signal and at least one of the determined displacement

optionally providing feedback regarding the user's breathing, the feedback comprising low-latency feedback to the user based on a comparison of the pose data to one or more rule sets for a desired breathing cycle

These limitations describe a mathematical calculation and/or a mental process as the skilled artisan is capable of performing the recited limitations and making a mental assessment thereafter. Examiner also notes that nothing from the claims suggests that the limitations cannot be practically performed by a human with the aid of a pen and paper, or using a generic computer as a tool to perform mathematical calculations and/or mental process steps in real time, and that nothing from the claims suggests an undue level of complexity such that these steps cannot be so performed. For example:

A human is capable of manually/mentally collecting pose data about the user's breathing from the one or more pose tracking systems, e.g. by reading data from a sensor. 
Isolating spatial components of the pose data in a fixed coordinate system and determining the displacement of the user's head during the breathing cycle from the pose data is a mathematical calculation that can be performed by a human with the aid of a pen and paper, or using a generic computer as a tool to perform mathematical calculations in real time.

A human is capable of manually/mentally producing a first signal representative of the user's breathing with the aid of a pen and paper, or using a generic computer to graph it.

A human is capable of manually/mentally assessing the user's breathing based on the first signal and at least one of the determined displacement by simply thinking, with the aid of a pen and paper, or using a generic computer as a tool to perform mathematical calculations and/or mental process steps in real time.

A human is capable of manually/mentally providing feedback regarding the user's breathing, the feedback comprising low-latency feedback to the user based on a comparison of the pose data to one or more rule sets for a desired breathing cycle, e.g. audibly, visually, or with the aid of a pen and paper. The Examiner notes that the broadest reasonable interpretation of “low-latency feedback” encompasses embodiments able to be practically performed in the human mind, such as verbal feedback within a few seconds.

Next, the claim as a whole is analyzed to determine whether any element, or combination of elements, integrates the identified judicial exception into a practical application (Step 2A, Prong Two). The following limitations amount to insignificant extra-solution activity to the judicial exception, e.g. mere data gathering. See MPEP 2106.05(g). 
one or more pose tracking systems, wherein the one or more pose tracking systems are configured to detect the position and orientation of the user's head, the position and orientation defining pose data

The following limitations amount to a recitation of the words "apply it" (or an equivalent) and/or nothing more than mere instructions to implement the abstract idea on a generic computer. See MPEP 2106.05(f).

at least one processing device configured to…

Therefore, these additional limitations do not integrate the judicial exception into a practical application.

Next, the claim as a whole is analyzed to determine whether any element, or combination of elements, amounts to significantly more than the identified judicial exception (Step 2B):

The following limitations do not amount to significantly more than the abstract idea for substantially similar reasons applied in Step 2A, Prong Two:

one or more pose tracking systems, wherein the one or more pose tracking systems are configured to detect the position and orientation of the user's head, the position and orientation defining pose data…

at least one processing device configured to…

The following limitations are considered to be well-understood, routine, and conventional (WURC). The one or more pose tracking systems are considered to be well-understood, routine, and conventional based on statements from the applicant's specification filed 06/13/2025 (“Pose tracking systems may comprise, in some embodiments, inertial measurement unit (IMU) sensors”, [0037]; “The pose tracking system may include pose tracking sensors such as inertial measurement unit (IMU) sensors”, [0041]; inertial measurement units under broadest reasonable interpretation are commercially available products). The at least one processing device is considered to be well-understood, routine, and conventional based on statements from the applicant's specification filed 06/13/2025 ([0144]-[0145]). 
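For concreteness, the operations the rejection characterizes as mathematical concepts, isolating a spatial component of pose samples in a fixed frame and taking its displacement over a breathing cycle, reduce to a few lines of arithmetic. The data values and axis conventions below are invented for illustration and are not taken from the application or the claims.

```python
# Toy sketch of the disputed calculation: isolate one spatial component of
# head-pose samples expressed in a fixed (world) coordinate frame, then take
# its peak-to-trough displacement over a breathing cycle. The sample values,
# units (millimetres), and axis assignments are hypothetical.

# Each pose sample: (x, y, z) head position in a fixed frame; orientation
# is omitted for brevity.
pose_samples = [
    (0.0, 0.0, 0.0),
    (0.4, 1.1, 0.1),
    (0.9, 2.6, 0.2),   # assumed peak of inhalation
    (0.5, 1.3, 0.1),
    (0.1, 0.2, 0.0),
]

def axis_displacement(samples, axis):
    """Peak-to-trough displacement along one spatial axis."""
    values = [s[axis] for s in samples]   # isolate one spatial component
    return max(values) - min(values)

longitudinal = axis_displacement(pose_samples, axis=0)  # assumed fore-aft axis
vertical = axis_displacement(pose_samples, axis=1)      # assumed up-down axis
print(longitudinal, vertical)  # prints "0.9 2.6"
```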
Dependent Claims 22-30 and 33-35 also fail to add subject matter qualifying as significantly more to the abstract independent claims as they merely further limit the abstract idea. Dependent Claims 18-21, 23, and 28-32 also fail to add subject matter qualifying as significantly more to the abstract independent claims as they recite limitations that do not integrate the claims into a practical application for substantially similar reasons as set forth above. Dependent Claims 18-21 and 28-32 also fail to add subject matter integrating the judicial exception or qualifying as significantly more to the abstract independent claims as they do not recite significantly more than the identified abstract idea for substantially similar reasons as set forth above.

Therefore, Claims 17-35 are not patent eligible under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 17, 20-24, 26-29, and 31-34 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Hale et al (US 20240370098 A1, hereinafter Hale).

Regarding Claim 17, Hale discloses a system for assessing a user's breathing in one or more breathing cycles (See Figs. 
1-4 and 5C), the system comprising: one or more pose tracking systems, wherein the one or more pose tracking systems are configured to detect the position and orientation of the user's head (“a camera configurable to detect at least a user's head position and orientation”, [0002]), the position and orientation defining pose data (“determining, based on the detected position of the head of the user, if at least part of a face of the user is directed towards to the displayed BIEIUI comprises determining a location (position x, y, z or distance to the device) of the head relative to the camera or display or microphone”, [0014]; “The orientation of the user's face is based on landmark features which are detected using any suitable technique known in the art to provide facial feature detection. For example, landmark features may include relative positions of eyes, nose, and mouth which can be used to determine a pitch, yaw, and, in some embodiments, a roll of a user's head”, [0134]); at least one processing device configured to (Element 68, Fig. 
5C; [0047]-[0048]): collect pose data about the user's breathing from the one or more pose tracking systems (“the facial characteristic extractor 44 is configured to extract facial features such as depth, mouth area, facial orientation, gaze and the like, and provide a set of one or more parameters representing one or more facial characteristic(s), such as, for example, distance, orientation, expression, mouth area within threshold, to the breath input detector 52 and/or breath controller 12”, [0135]), isolate spatial components of the pose data in a fixed coordinate system (“The head position determined using the head tracker 30a may be provided in any suitable set of coordinates, for example, the position may be provided as xyz co-ordinates relative to the device 14 within the FoV of camera 18”, [0133]), and determine the displacement of the user's head during the breathing cycle from the pose data (“a breath input is determined to be acceptable based on the output 24a of the face tracker indicating a face is detected at a distance within a predetermined range of distances from the device 14”, [0131]; to detect a correct distance within a range of distances, the changes in distance, i.e. 
the displacement of the user’s head, must also be detected); produce a first signal representative of the user's breathing (“The output of the breath input controller module 128 comprises accepted breath input 36...”, [0136]); assess the user's breathing based on the first signal and at least one of the determined displacement (“The output of the breath input controller module 128 comprises accepted breath input 36 which is then processed by the electronic device, for example, by using one or more processors 68”, [0137]); and optionally providing feedback regarding the user's breathing (“Processors 68 are configured to cause the electronic device to perform functionality responsive to the received breath input depending on the specific configuration of the BIEUI 20 associated with the received breath input”, [0137]), the feedback comprising low-latency feedback (“Real-time user feedback on breathing input”, [0185]) to the user (“determining 408 a conformance of one or more characteristics of the detected audio input with corresponding one or more characteristics of a predetermined type of intentional breath input to the breath input training BIEUI 18, and causing 410 at least one visual indicator of the conformance to be presented in the breath input training BIEUI”, [0165]) based on a comparison of the pose data to one or more rule sets (See Figs. 16A-16E; where the user’s head position and orientation are compared to a “correct” standard for proper breath detection as disclosed in [0162]; “By providing visual feedback to a user, the user is made aware of how they can optimally align their head position and/or orientation so as to increase the probability of intentional breath being detected by the electronic device and provided as acceptable breath input to a breath enabled user interface”, [0042]) for a desired breathing cycle (“correct positioning by a user automatically triggers an update of the UI to the next screen... 
once a user's head is in a correct position for their breath input to be optimally detected as intentional breath input, the training session screen transitions and presents a sequence of screens where visual/audio indicators are provided to guide a user to provide breath input which is optimally likely to be considered intentional input of a particular type based on the audio signal detected from the user's breath”, [0162]).

Regarding Claim 20, Hale discloses the system of claim 17, further comprising a microphone (Element 16, Figs. 2 and 5C), wherein the microphone tracks audio associated with breathing of the user (“In FIG. 5C, a user generates an audio input 26 by breathing 12 into a microphone 16 of a device 14 such as an electronic device”, [0125]), and wherein the at least one processing device is configured to collect audio from the microphone (“The audio input is provided as raw input to a breath audio signal detector 32 which may track the audio input using a breath audio input tracker 32a”, [0125]), and wherein assessment of the user's breathing includes compiling the audio with the pose data (“once a user's head is in a correct position for their breath input to be optimally detected as intentional breath input, the training session screen transitions and presents a sequence of screens where visual/audio indicators are provided to guide a user to provide breath input which is optimally likely to be considered intentional input of a particular type based on the audio signal detected from the user's breath”, [0162]) to determine a breath state (See Figs. 18-24; “OUT”, “HOLD”, and “IN” are breath states).

Regarding Claim 21, Hale discloses the system of claim 17, wherein the at least one processing device is further configured to provide a cue to the user prior to gathering data about the user's breathing (“FIG. 
18 illustrates schematically an example screenshot sequence 80a to 80f in a breath input enabled user interface configured to train a user to provide a breath input to a breath input enabled user interface input by exhaling using visual indicators provided on a BIEUI 18 on a display 22, for example, by a device 14”, [0166]), wherein the cue is one of a cue type (See Figs. 18-24; visual cue types “OUT”, “HOLD”, and “IN” are disclosed).

Regarding Claim 22, Hale discloses the system of claim 21, wherein assessing the user's breathing comprises assessing compliance with the cue type (See Figs. 17A-17B; “An example of a foreground graphic visual indicator is an animation which shows a shape such as a triangle which fills up with successful breath input over time. For example, the shape or triangle may fill up with breath exhalation input being detected to be a certain magnitude, or dependent on the determined flow rate of breath exhalation”, [0170]), and wherein the cue type is one of inhale, exhale, or hold (See Figs. 18-24; visual cue types “OUT”, “HOLD”, and “IN” are disclosed), and wherein the cue represents a rule set of the one or more rule sets (The visual cue types “OUT”, “HOLD”, and “IN” are the three possible breathing states; under broadest reasonable interpretation, they are also the three possible rule sets for breathing).

Regarding Claim 23, Hale discloses the system of claim 24, wherein at least one of the one or more pose tracking systems is disposed in a wearable or headset having a graphical display (“In some embodiments, the breath input enabled user interfaces, BIEUIs, may be provided as spatial interfaces in two or three dimensions... for example, if the display supports augmented or virtual reality applications then the BIEUI may comprise a three-dimensional user interface. 
Examples of displays which support AR or VR BIEUIs include headsets and/or hard or soft holographic displays”, [0243]), and wherein feedback includes a visual representation of breathing on the graphical display (“FIGS. 21 to 24 each illustrate schematically other examples of screenshots sequence in a breath input enabled user interface configured to train a user to provide a breath input to a breath input enabled user interface input by exhaling which provide examples of foreground and background visual indicators of a BIEUI 18 on a display 22, for example, by a device 14 performing a method of providing a BIEUI for training a user to provide more acceptable intentional breath input in some embodiments of the invention”, [0169]), and wherein the visual representation changes if a breath state is not in compliance with a cue type (“In some embodiments, a rippling out of colour outwards may provide a visual cue regarding how characteristics of a user's breath is being detected by the electronic device and, in some embodiments, checked for conformity with a target type of breath input”, [0174]; under broadest reasonable interpretation, this would also include instances where breath characteristics are not in conformity with a target type of breath input). Regarding Claim 24, Hale discloses the system of claim 22, wherein assessing compliance with cue type includes determining a breath state (See Figs. 18-24; types of visual cue types “OUT”, “HOLD”, and “IN” are breath states) of the user and comparing the breath state to the cue type (“In some embodiments, a rippling out of colour outwards may provide a visual cue regarding how characteristics of a user's breath is being detected by the electronic device and, in some embodiments, checked for conformity with a target type of breath input”, [0174]). 
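The compliance logic the Examiner reads into Claims 22 and 24 (classify a breath state, then compare it against the active cue type) can be sketched as follows. The function names, the hold threshold, and the signed-flow convention are illustrative assumptions, not Hale's disclosed implementation.

```python
# Hypothetical sketch of the cue-compliance check described for Claims 22 and 24:
# classify a breath state from a signed airflow sample, then compare it to the
# active cue. Names and thresholds are assumptions for illustration only.

def breath_state(flow_rate: float, hold_threshold: float = 0.05) -> str:
    """Map a signed flow-rate sample to one of the three states."""
    if abs(flow_rate) < hold_threshold:
        return "HOLD"
    return "IN" if flow_rate > 0 else "OUT"

def complies_with_cue(flow_rate: float, cue_type: str) -> bool:
    """Claim 24: determine the breath state and compare it to the cue type."""
    return breath_state(flow_rate) == cue_type

# a positive (inhale) flow sample against an "IN" cue
print(complies_with_cue(0.8, "IN"))   # True
print(complies_with_cue(-0.6, "IN"))  # False
```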
Regarding Claim 26, Hale discloses the system of claim 17, and wherein the at least one processor is further configured to establish one or more baselines for elements of the user's pose (“In some embodiments, the distance is determined based on a consistent point of reference such as the display of the device but in some embodiments, it is assumed that the camera is mounted near the display and the distance is measured to the display/camera. The position from the camera is known, and the user's face and facial features which are measured by the camera may be measured using the camera. This allows a position to the display, microphone or any other consistent reference point for position measurements on the device to be inferred providing the relation between camera and device/mic is known”, [0015]). Regarding Claim 28, Hale discloses the system of claim 17, wherein the fixed coordinate system includes a longitudinal axis and a vertical axis (“ FIG. 4E shows schematically an example of a reference co-ordinate system which may be used by the electronic device 14 to implement an embodiment of the method of breath recognition”, [0120]; Fig. 4E shows a longitudinal axis and a vertical axis), and wherein the at least one processing device is configured to determine at least one of the longitudinal displacement and vertical displacement of the user's head during the breathing cycle from the pose data (“for example, the position may be provided as xyz co-ordinates relative to the device 14”, [0133]). Regarding Claim 29, Hale discloses the system of claim 28, the at least one processing device is configured to determine the longitudinal displacement of the user's head during the breathing cycle from the pose data (“for example, the position may be provided as xyz co-ordinates relative to the device 14”, [0133]). 
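The Claims 28-29 mapping (longitudinal and vertical displacement of the head derived from xyz pose data) reduces to reading one axis of the pose samples over a breathing cycle. A minimal sketch, assuming peak-to-peak displacement and a y-vertical, z-longitudinal axis convention that the cited passages do not spell out:

```python
# Hypothetical sketch of Claims 28-29: longitudinal and vertical head
# displacement over a breathing cycle, taken from (x, y, z) pose samples.
# The axis convention and sample values are assumptions for illustration.

def axis_displacement(pose_samples, axis: int) -> float:
    """Peak-to-peak displacement along one axis of (x, y, z) pose data."""
    values = [p[axis] for p in pose_samples]
    return max(values) - min(values)

# xyz co-ordinates relative to the device, in the manner of Hale [0133]
cycle = [(0.00, 0.10, 0.50), (0.01, 0.12, 0.52), (0.00, 0.11, 0.51)]
vertical = axis_displacement(cycle, axis=1)      # y-axis
longitudinal = axis_displacement(cycle, axis=2)  # z-axis
print(round(vertical, 3), round(longitudinal, 3))
```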
Regarding Claim 31, Hale discloses the system of claim 17, wherein the pose tracking system includes an external reference sensor (“The display controller may be configured to a combination of one or more of the optical sensors 18, 122, along with the proximity sensor 112 and/or the accelerometer 114 in some embodiments to determine one or more of a user's head position, facial characteristics of the user's face, the orientation of a user's face relative to the display and/or camera and/or microphone and also, optionally, to track a user's gaze”, [0223]) and at least one of an accelerometer (“FIG. 3A shows in the lower right hand side of the figure that the device yaw, pitch and roll is measured as well, for example, by using an accelerometer or the like in order to determine the relative face position of the user to the displayed BIEUI 20 on the screen 22 on or associated with device 14”, [0118]), magnetometer, and a gyroscope (The Examiner notes these elements are listed as part of an alternative list and do not have to be disclosed by the reference in order for the limitation to be anticipated). Regarding Claim 32, Hale discloses the system of claim 31, wherein the reference sensor is selected from the group consisting of a camera, passive infrared sensors (“The detected head position of the user is provided using a camera 18 ... but may also be determined in some embodiments instead or in addition by using one or more image sensors 122 such as other optical or infra-red sensors 122 are configured to feed images in the form of video data input to facial tracking module 30”, [0105]), LIDAR sensors (“Examples of suitable smartphones and other types of electronic devices such as a tablet on which a method of breath input recognition or any of the other methods disclosed herein include... 
a LIDAR sensor”, [0131]), and ultrasonic sensors (The Examiner notes this element is listed as part of an alternative list and does not have to be disclosed by the reference in order for the limitation to be anticipated). Regarding Claim 33, Hale discloses the system of claim 17, wherein assessing the user's breathing comprises determining a breath state (See Fig. 22; “out” indicates a determined breath state of exhalation). Regarding Claim 34, Hale discloses the system of claim 17, wherein the user's breathing includes a breath cycle having a plurality of windows (See Fig. 23; a window for breathing in and a window for breath hold are shown as exemplified in Figs. 19-20), and wherein the at least one processing device is configured to assess the user's breathing (“The output of the breath input controller module 128 comprises accepted breath input 36 which is then processed by the electronic device, for example, by using one or more processors 68”, [0137]), determine a breath state (See Figs. 18-20), and provide actionable feedback to the user relating to breath state within a same window of the plurality of windows (See Fig. 23; “FIG. 23 shows screenshots 90a to 90c which illustrate an example of how technical information relating to how a user's breath is being detected may be graphically indicated using visual feedback elements such a foreground triangle which is traced out as a user inhales and where a background visual effect comprises a soft gradient, a different contrast area or different colour circle expanding and becoming more vivid with the breath input”, [0174]). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Hale in view of Djokovic et al (US 20190258315 A1, hereinafter Djokovic). 
Regarding Claim 18, Hale discloses the system of claim 17, wherein the one or more pose tracking systems further determine pitch of the user's head (“The pitch and yaw are measured in some embodiments to reduce the discrepancy between the user's detected pitch/yaw and the pitch yaw determined for optimum breath input”, [0118]), and wherein the at least one processing device is configured to optionally assess the user's breathing based on the first signal (“The output of the breath input controller module 128 comprises accepted breath input 36 which is then processed by the electronic device, for example, by using one or more processors 68”, [0137]), and wherein assessing the user's breathing produces a second signal representative of the user's breathing (“FIGS. 21-23 show example visual indicators of breath input”, [0172]; “FIG. 22 shows a ‘boid’ flocking simulation, with dispersion of the particles linked to breath input”, [0173]). Hale discloses the claimed invention except for expressly disclosing wherein the at least one processing device is configured to correlate changes in pitch with the determined displacement to identify a breath state and optionally assess the user's breathing based on the determined pitch. However, Djokovic, which is also directed towards a system for assessing a user's breathing in one or more breathing cycles (“FIG. 4 illustrates an embodiment of a process for determining a breathing rate from IMU data”, [0030]), teaches wherein the at least one processing device (Element 110, Fig. 2) is configured to correlate changes in pitch (Step 406, Fig. 4; “Similarly, the predefined range associated with the smoothed change in angle may correspond to an expected change in pitch or rotation about the x-axis (i.e., a left-right axis parallel with a width of the human body) indicative of a breath”, [0030]) with the determined displacement (Step 402, Fig. 
4; “The media processing device 110 obtains 402 IMU data (e.g., from the IMU 284 in a head-mounted sensor) that represents the change in angle and change in position relative to a prior time block...”, [0030]) to identify a breath state and optionally assess the user's breathing based on the determined pitch (Step 408, Fig. 4, which is based on Steps 402-406, Fig. 4; “The media processing device 110 then identifies 406 a window of smoothed IMU data meeting predefined criteria for a detected breath. For example, the media processing device identifies a window in which the smoothed change in position values and/or the smoothed change in angle values is within a predefined expected range over the time window to identify breaths”, [0030]; the minimum/maximum/median position values of accelerometer data are indicative of breathing in, breathing out, and breath holds). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the processing configuration steps of Djokovic to the system of Hale, because using additional data to determine breath state makes a more robust and reliable breath state determination configuration. Regarding Claim 19, modified Hale discloses the system of claim 18, wherein at least one of the one or more of the pose tracking systems is disposed in a wearable or in a virtual or augmented reality headset (“In some embodiments, the breath input enabled user interfaces, BIEUIs, may be provided as spatial interfaces in two or three dimensions... for example, if the display supports augmented or virtual reality applications then the BIEUI may comprise a three-dimensional user interface. Examples of displays which support AR or VR BIEUIs include headsets and/or hard or soft holographic displays”, [0243]). Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Hale in view of Ahmadi Noorbakhsh (US 20240050674 A1, hereinafter Ahmadi Noorbakhsh). 
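The Djokovic mechanism relied on above for Claim 18 (smooth windowed IMU pitch-change data, then flag a breath when the smoothed values stay inside a predefined expected range) can be sketched roughly as below. The smoothing width and range bounds are placeholder assumptions, since Djokovic's actual thresholds are not in the record.

```python
# Hypothetical sketch of the Djokovic-style check cited for Claim 18: smooth
# IMU pitch-change samples, then accept a window as a breath when every
# smoothed value falls within a predefined expected range.

def smooth(samples, k=3):
    """Simple trailing moving average over up to k samples."""
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - k + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

def window_is_breath(pitch_deltas, lo=-0.5, hi=0.5) -> bool:
    """True when every smoothed pitch change stays in the expected range."""
    return all(lo <= v <= hi for v in smooth(pitch_deltas))

print(window_is_breath([0.1, 0.2, 0.15, 0.1]))  # True
print(window_is_breath([0.1, 3.0, 0.2, 0.1]))   # False: motion-artifact spike
```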
Regarding Claim 25, Hale discloses the system of claim 21, wherein the feedback includes a score (“wherein the direction and speed at which the shape outline is traced is determined in real-time based on a score indicative of the conformance”, [0182]). Hale discloses the claimed invention except for expressly disclosing wherein the score is calculated by dividing a percentage of breath operations performed by the user matching corresponding cues by a total number of cues. However, Ahmadi Noorbakhsh, which is also directed towards a system for assessing a user's breathing in one or more breathing cycles (“An exemplary processing unit may be configured to receive an exemplary output signal from an exemplary sensor assembly and calculate breathing parameters”, [0063]), teaches wherein the score is calculated by dividing a percentage of breath operations performed by the user matching corresponding cues by a total number of cues (“processing unit 702 may further be configured to calculate a user's score, based at least in part on the relative percentage of a number of properly given breaths divided by a total number of the given breaths within a certain time interval. For example, if a total of 100 breaths are given within a 600 second time interval, and in which 90 breaths are counted as properly given breaths, then the processing unit 702 may calculate a 90% user's score”, [0135]). As both the score calculation method of Hale and the score calculation method of Ahmadi Noorbakhsh achieve the same goal of a quantitative assessment of a user’s breathing, one of ordinary skill in the art could have substituted one known element for another, and the results of the substitution would have been predictable (a quantitative assessment of a user’s breathing). Claims 27 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Hale in view of Gwak et al (US 20240268711 A1, hereinafter Gwak). Regarding Claim 27, Hale discloses the system of claim 17. 
Hale discloses the claimed invention except for expressly disclosing wherein at least one baseline of the one or more baselines is iteratively adjusted during the user's breathing. However, Gwak teaches wherein at least one baseline of the one or more baselines (“the electronic device 101 uses a motion tracking algorithm to track the horizontal (X-axis) and vertical (Y-axis) movements of the facial landmarks 315 by detecting the X and Y coordinates of the center point of each facial landmark 315 in each frame of the video 210”, [0056]) is iteratively adjusted during the user's breathing (“In general, face motion signals can be vulnerable to noise or motion artifacts due to sudden voluntary or involuntary movements of the person during recording of the video 210. Thus, after the motion extraction operation 225, the electronic device 101 performs a motion artifact removal operation 230 to remove the motion artifacts from the motion signal”, [0057]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hale with Gwak, because face motion signals can be vulnerable to noise or motion artifacts due to sudden voluntary or involuntary movements of the person and the noise removal of Gwak removes this unnecessary data (Gwak, [0057]). Regarding Claim 30, Hale discloses the system of claim 26. Hale discloses the claimed invention except for expressly disclosing wherein the at least one processing device (Element 120, Fig. 1) is further configured to: approximate a breath cycle of the user as a DC-balanced waveform using at least one baseline of the one or more baselines; and identify a breath state of the user, wherein the breath state is determined by identifying a change in the direction of displacement of a user's head at or near a peak of the waveform. 
However, Gwak teaches wherein the at least one processing device is further configured to: approximate a breath cycle of the user as a DC-balanced waveform (“In general, face motion signals can be vulnerable to noise or motion artifacts due to sudden voluntary or involuntary movements of the person during recording of the video 210. Thus, after the motion extraction operation 225, the electronic device 101 performs a motion artifact removal operation 230 to remove the motion artifacts from the motion signal”, [0057]; the examiner notes this description corresponds to [0114] of the applicant’s specification, which explains that a DC balanced waveform is free of positional bias) using at least one baseline of the one or more baselines (“the electronic device 101 uses a motion tracking algorithm to track the horizontal (X-axis) and vertical (Y-axis) movements of the facial landmarks 315 by detecting the X and Y coordinates of the center point of each facial landmark 315 in each frame of the video 210”, [0056]); and identify a breath state of the user, wherein the breath state is determined by identifying a change in the direction of displacement of a user's head at or near a peak of the waveform (“a chart 401 in FIG. 4A depicts an example motion-based respiratory signal 240 over a forty second time window”, [0066]; see Fig. 4A; the peaks and valleys of the motion based signal correspond to the breath states of breathing in and breathing out). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hale with Gwak, because face motion signals can be vulnerable to noise or motion artifacts due to sudden voluntary or involuntary movements of the person and the noise removal of Gwak removes this unnecessary data (Gwak, [0057]). Claim 35 is rejected under 35 U.S.C. 103 as being unpatentable over Hale in view of Platt et al (US 20230263423 A1, hereinafter Platt). 
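The Claim 30 combination above (a DC-balanced waveform plus a breath-state change identified at a reversal in displacement direction near a peak) can be illustrated with a short sketch. Mean subtraction as the DC-balancing step and the sample signal are assumptions for illustration only, not Gwak's disclosed method.

```python
# Hypothetical sketch of the Claim 30 reading of Gwak: remove the DC offset
# (positional bias) from a head-displacement signal, then mark a breath-state
# change where the direction of displacement reverses at a peak or valley.

def dc_balance(signal):
    """Subtract the mean so the waveform carries no positional bias."""
    mean = sum(signal) / len(signal)
    return [v - mean for v in signal]

def direction_changes(signal):
    """Indices where displacement direction reverses (peaks and valleys)."""
    turns = []
    for i in range(1, len(signal) - 1):
        if (signal[i] - signal[i - 1]) * (signal[i + 1] - signal[i]) < 0:
            turns.append(i)
    return turns

displacement = [0.0, 0.3, 0.6, 0.4, 0.1, -0.2, 0.1, 0.4]
balanced = dc_balance(displacement)
print(direction_changes(balanced))  # peaks/valleys marking breath-state changes
```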
Regarding Claim 35, Hale discloses the system of claim 34. Hale discloses the claimed invention except for expressly disclosing wherein the plurality of windows includes a Settle-in window, a Settled window, and a Settle-out window. However, Platt teaches wherein the plurality of windows includes a Settle-in window, a Settled window, and a Settle-out window (Platt, [0062]; under broadest reasonable interpretation, a 50 ms window is short enough to fall within a phase when a user is to begin an inhalation or exhalation, when the user is following through with inhalation or exhalation, or when the user is approaching an inspiratory or expiratory pause or change in direction; this corresponds to the applicant’s definition of a Settle-in window, a Settled window, and a Settle-out window in [0065]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hale with Platt, because a greater sampling frequency leads to a more robust data set. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See Suematsu (US 20190069810 A1) (Fig. 7 and [0069]). See Zhou (US 20190154785 A1) ([0076]). See Eisenhardt et al (US 20190269968 A1) ([0029]). See Hsu et al (US 20220061753 A1) ([0063]). See the Non-Patent Literature (NPL) to Kurnikova et al (“Coordination of Orofacial Motor Actions into Exploratory Behavior by Rat”). See the Non-Patent Literature (NPL) to Liao et al (“A change in behavioral state switches the pattern of motor output that underlies rhythmic head and orofacial movements”). See the Non-Patent Literature (NPL) to Szczygieł et al (“Biomechanical influences on head posture and the respiratory movements of the chest”). Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). 
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN EPHRAIM COOPER whose telephone number is (571)272-2860. The examiner can normally be reached Monday-Friday 7:30AM-5:30PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jacqueline Cheng can be reached at (571) 272-5596. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JONATHAN E. COOPER/ Examiner, Art Unit 3791 /JACQUELINE CHENG/Supervisory Patent Examiner, Art Unit 3791

Prosecution Timeline

Jun 13, 2025
Application Filed
Oct 14, 2025
Non-Final Rejection — §101, §102, §103
Jan 23, 2026
Response Filed
Mar 20, 2026
Final Rejection — §101, §102, §103
Apr 14, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12558001
MUSCLE FATIGUE DETERMINATION METHOD
2y 5m to grant Granted Feb 24, 2026
Patent 12543963
APPARATUS AND METHOD FOR ESTIMATING BIO-INFORMATION
2y 5m to grant Granted Feb 10, 2026
Patent 12538956
Footwear Having Sensor System
2y 5m to grant Granted Feb 03, 2026
Patent 12507905
DEVICE AND METHOD FOR REAL TIME ASSESSMENT AND MONITORING OF THORACIC FLUID, AIR TRAPPING AND VENTILATION
2y 5m to grant Granted Dec 30, 2025
Patent 12465246
SYSTEMS FOR PHYSIOLOGICAL CHARACTERISTIC MONITORING
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
46%
Grant Probability
79%
With Interview (+32.5%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 134 resolved cases by this examiner. Grant probability derived from career allow rate.
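For reference, the headline projection numbers appear to combine as simple percentage-point arithmetic. The sketch below assumes the interview lift is additive on the career allow rate; the page does not state its formula, so this is an inferred reconstruction.

```python
# Hypothetical reconstruction of the dashboard's headline numbers: career allow
# rate from resolved cases, plus the interview lift treated as an additive
# percentage-point adjustment. The additive assumption is ours, not the page's.

granted, resolved = 62, 134
allow_rate = 100 * granted / resolved         # about 46.3%
interview_lift = 32.5                         # percentage points
with_interview = allow_rate + interview_lift  # about 78.8%, shown as 79%

print(round(allow_rate), round(with_interview))  # 46 79
```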
