Prosecution Insights
Last updated: April 17, 2026
Application No. 18/130,925

COMPUTATIONAL MODEL, METHOD, SYSTEM AND EXERCISE FRAMEWORK FOR ELEVATING CONSCIOUSNESS, INNER SYNCHRONIZATION AND GENERAL WELLNESS

Final Rejection: §101, §103, §112
Filed: Apr 05, 2023
Examiner: HIGGS, STELLA EUN
Art Unit: 3681
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: unknown
OA Round: 2 (Final)

Grant Probability: 39% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 8m
Grant Probability with Interview: 73%

Examiner Intelligence

Career Allow Rate: 39% (138 granted / 352 resolved; -12.8% vs TC avg)
Interview Lift: +34.1% among resolved cases with interview
Typical Timeline: 3y 8m avg prosecution; 44 applications currently pending
Career History: 396 total applications across all art units
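The headline figures above follow from simple ratios. The sketch below reproduces the career allow rate from the reported counts; the interview-lift split (how many resolved cases had an interview) is not broken out in the dashboard, so the per-group counts used here are hypothetical, chosen only to be consistent with the 138/352 career totals.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# Career allow rate from the dashboard: 138 granted of 352 resolved.
career = allow_rate(138, 352)
print(f"Career allow rate: {career:.0f}%")  # ~39%

# Interview lift, read here as the allowance-rate gap between resolved
# cases with and without an examiner interview. The group counts below
# are hypothetical; only the 138/352 totals come from the dashboard.
with_interview = allow_rate(58, 90)        # hypothetical split
without_interview = allow_rate(80, 262)    # hypothetical split
print(f"Interview lift: {with_interview - without_interview:+.1f} pts")
```

Under that (assumed) definition, any split of the career totals with a markedly higher rate in the interviewed group yields a lift in the neighborhood of the reported +34.1%.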

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§103: 49.5% (+9.5% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)

Tech Center averages are estimates; based on career data from 352 resolved cases.
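The "vs TC avg" deltas can be inverted to recover the implied Tech Center averages, assuming each delta is simply the examiner's rate minus the TC average (an assumption; the dashboard does not state its formula). A minimal sketch:

```python
# Examiner rate per statute and the reported delta vs the Tech Center
# average, both in percentage points (values from the dashboard above).
stats = {
    "101": (18.7, -21.3),
    "103": (49.5, +9.5),
    "102": (12.9, -27.1),
    "112": (13.9, -26.1),
}

# Assuming delta = examiner_rate - tc_avg, the implied TC average is
# examiner_rate - delta. Every statute implies the same ~40.0% estimate,
# which supports reading the deltas this way.
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}%")
```

That all four statutes back out to the same 40.0% estimate suggests the tool compares each statute against a single overall TC figure rather than per-statute averages.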

Office Action

Rejections under §101, §103, and §112
DETAILED ACTION

This action is made in response to the amendments/remarks filed on August 6, 2025. This action is made final. Claims 4-8, 10-13, 15-16, 18, and 22-31 are pending. Claims 1-3 were previously cancelled by preliminary amendment. Claims 9, 14, 17, and 19-21 are presently cancelled. Claims 27-31 are newly added. Claims 4, 25, and 26 are independent claims. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Arguments

Applicant's arguments filed August 6, 2025 have been fully considered with respect to the art rejection, but are moot in light of the new grounds of rejection.

Applicant's arguments with respect to the previous §101 rejection have been fully considered, but they are not persuasive. As to the §101 rejection, Applicant argues the amended claims provide technological solutions to the problem of improving yoga and wellness practice, which solutions cannot technically be provided by a human instructor. The examiner respectfully disagrees. MPEP 2106.04(d)(1) and MPEP 2106.05(a) indicate that a practical application may be present where the claimed invention provides a technical solution to a technical problem. See, e.g., DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1259 (Fed. Cir. 2014) (finding that claiming a website that retained the "look and feel" of a host webpage provided a technological solution to the problem of retaining website visitors, where the problem arose out of the Internet and was thus a technical problem). Here, Applicant's argued problem is not a technological problem caused by technology or any technological environment.
The problem of guiding a user in meditation (or "improving yoga and wellness practice," as asserted by Applicant) is not a problem caused by any of the technological components argued and claimed in the application, such as the sensors or detectors; rather, it is a problem that existed and exists regardless of whether sensors and detectors are involved in the process. At best, Applicant's identified problem is a teaching or training problem, and one that is a method of organizing human activity. Because no technological problem is present, the claims do not provide a practical application.

Furthermore, insomuch as Applicant argues the claims solve the technological problem of "how to precisely measure, quantify, and provide real-time feedback," the examiner respectfully disagrees. As a first matter, as previously stated, the claims are directed toward guiding a user in meditation and/or wellness practice, which falls under a method of organizing human activity and is not a technological problem, as addressed above. Similarly, "precisely measure, quantify, and provide real-time feedback" is not a problem caused by any of the technological components argued and claimed in the application, such as the sensors or detectors; rather, it is a problem that existed and exists regardless of whether sensors and detectors are involved in the process. At best, Applicant's identified problem is a teaching or training problem, and one that is a method of organizing human activity. Because no technological problem is present, the claims do not provide a practical application.

Applicant's argument that the specific sensor hardware components are not routine or conventional computer components is not persuasive. As a first matter, it is noted that the newly added claim amendments are not supported in the originally filed specification and are rejected as encompassing new matter.
Furthermore, "at least one of a tactile sensor, an image sensor, or an audio sensor," "at least one of a microphone, a sound sensor, or an audio sensor," "an output interface including a speaker," "a processor," and "communication interface" are all recited at a high level of generality. Although these components execute instructions to perform the abstract idea itself, this does not serve to integrate the abstract idea into a practical application, as it merely amounts to instructions to "apply it." (See MPEP 2106.04(d)(2), indicating that mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application.) Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea.

Furthermore, Applicant's own specification describes the invention as computerized software configured on a mobile device, PC, or tablet utilizing cameras, microphones, tactile, movement, and breath flow sensors, sound/voice sensors, etc. Accordingly, the "at least one of a tactile sensor, an image sensor, or an audio sensor," "at least one of a microphone, a sound sensor, or an audio sensor," "an output interface including a speaker," "a processor," and "communication interface" all nondescriptly automate the user-monitoring actions that a person monitoring or guiding another through a meditation/wellness program would perform; this is quintessentially "do it on a computer," and any purported specificity identified in the claim lies in the identified abstract idea. (See, e.g., Univ. of Fla. Rsch. Found., Inc. v. Gen. Elec. Co., 916 F.3d 1363, 1367 (Fed. Cir. 2019), and SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161, 1170 (Fed. Cir. 2018).)
As to claims 16, 22, and 27, Applicant similarly argues that the claims amount to specific technological and/or algorithmic processing beyond simple human instructions, but the examiner respectfully disagrees. As previously stated, the claims are directed to guiding/leading a user through meditation and/or a wellness program, which is a method of organizing human activity. Applicant is further reminded that "certain methods of organizing human activity" include a person's interaction with a computer (see MPEP 2106.04(a)(2)(II)) and, as recognized by the CAFC, "[t]he fact that the required calculations could be performed more efficiently via a computer does not materially alter the patent eligibility of the claimed subject matter." FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016) (quoting Bancorp Servs., L.L.C. v. Sun Life Assurance Co. of Can. (U.S.), 687 F.3d 1266, 1278 (Fed. Cir. 2012)). Because the amount of data analyzed is not material to the characterization of the abstract idea as a certain method of organizing human activity, Applicant's arguments are not persuasive.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.
112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 4-8, 10-13, 15-16, 26-29, and 31 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

As to independent claim 4, the claim recites "a motion detector comprising at least one of a tactile sensor, and image sensor, or an audio sensor" and "a speech detector comprising at least one of a microphone, a sound sensor, or a voice sensor". However, a review of the specification fails to disclose any particular "motion detector," much less one that is a tactile sensor, an image sensor, or an audio sensor. Similarly, the specification fails to disclose any "speech detector," much less one that is a microphone, a sound sensor, or a voice sensor.

As to dependent claim 5, the claim recites "wherein the breath flow sensor comprises a microphone". However, a review of the specification fails to disclose any such breath flow sensor comprising a microphone.

As to dependent claim 7, the claim recites "wherein the breath flow sensor comprises an image capturing element". However, a review of the specification fails to disclose any such breath flow sensor comprising an image capturing element.
As to dependent claim 8, the claim recites "wherein the motion detector comprises an audio receiver". However, a review of the specification fails to disclose any such motion detector, much less one that comprises an audio receiver.

As to dependent claim 10, the claim recites "wherein the motion detector comprises a tactile sensor". However, a review of the specification fails to disclose any such motion detector, much less one that comprises a tactile sensor.

As to dependent claim 12, the claim recites "wherein the motion detector comprises an image capturing element". However, a review of the specification fails to disclose any such motion detector, much less one that comprises an image capturing element.

As to independent claim 26, the claim recites "receiving, via an input interface, quantitative user inputs of the experiential cognitive perceptive and/or emotional intensities experience during carrying out of the selected exercise sequence" and "computing quantitative theme-based features…". However, a review of the specification does not disclose the receiving of quantitative user inputs or the computing of quantitative theme-based features.

As to dependent claim 28, the claim recites "…a quantitative value reflecting a refinement of the user's synchronization…". However, a review of the specification fails to teach any such quantitative value.

As to dependent claim 29, the claim recites "the questions relate to experiential intensity values relating to positive motivators and negative motivators in each experiential instance in at least two states…compute a distance measure for each of the experiences…enabling a user to select the experiential features on which the user should focus their work". However, a review of the specification does not teach questions relating to experiential intensity values relating to positive/negative motivators, computing a distance measure for each of the experiences, or permitting a user to select the experiential features.
The dependent claims 5-8, 10-13, 15-16, 27-29, and 31 fail to resolve the 112 deficiencies of their parent claims and are similarly rejected. Appropriate correction is required. Applicant is reminded that any amendments must be fully supported by the originally filed specification.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5, 7, 8, 16, 22-24, 26, 28, 29, and 31 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

As to claim 5, the claim recites "the breath flow sensor comprises a microphone…". However, it is unclear how the sensor comprises a microphone. While a sensor is commonly understood to be a device that detects change, it is unclear how the sensor itself comprises a microphone rather than using the microphone data as input for the sensor. Accordingly, the claim will be interpreted in a manner as best understood by the examiner, such that where the prior art teaches a microphone for sending data to a flow sensor, it reads upon the claimed limitation.

As to claim 7, the claim recites "the breath flow sensor comprises an image capturing element…". However, it is unclear how the sensor comprises an image capturing element.
While a sensor is commonly understood to be a device that detects change, it is unclear how the sensor itself comprises an image capturing element rather than using the image capturing element data as input for the sensor. Accordingly, the claim will be interpreted in a manner as best understood by the examiner, such that where the prior art teaches an image capturing element for sending data to a flow sensor, it reads upon the claimed limitation.

As to claim 8, the claim recites "the breath flow sensor comprises an audio receiver…". However, it is unclear how the sensor comprises an audio receiver. While a sensor is commonly understood to be a device that detects change, it is unclear how the sensor itself comprises an audio receiver rather than using the audio receiver data as input for the sensor. Accordingly, the claim will be interpreted in a manner as best understood by the examiner, such that where the prior art teaches an audio receiver for sending data to a flow sensor, it reads upon the claimed limitation.

As to claim 16, the claim recites "computationally quantify the subjective experiential intensity scores and/or relationships therebetween, including deriving at least one of a ratio, a contrast measure, and distance measure between different mental functions during carrying out of the exercise sequence". However, the metes and bounds of the claim are unclear. As a first matter, Applicant's use of the term "and/or" makes it unclear whether both or only one of the subjective experiential intensity scores and the relationships therebetween are computationally quantified. Furthermore, it is unclear how the ratio, contrast measure, and distance measure are determined. It is unclear whether the ratio, contrast measure, and distance measure are each between different mental functions, or whether only the distance measure is between different mental functions.
Furthermore, even if the ratio, contrast measure, and distance measure are each between different mental functions, it is further unclear what the different mental functions entail, much less how a ratio, contrast measure, or distance is determined. Accordingly, the claim will be interpreted in a manner as best understood by the examiner, such that where the prior art teaches some quantifiable difference between different mental functions, it reads upon the claimed limitation.

As to claim 22, the claim recites "computationally quantify the subjective experiential intensity scores by computing pairwise ratios of the quantitative scores of the experiential intensity values for the features of each of the dual pairs". However, it is unclear what the experiential intensity values for the features of each of the dual pairs entail, much less how a quantitative score is determined. The metes and bounds of the claim cannot be determined, nor can the examiner make a reasonable interpretation to appropriately apply art.

As to claims 23-24, the claims fail to resolve the 112 deficiency of their parent claim and are similarly rejected.

As to claim 26, the claim recites "comparing the map recorded in the initial user session with the visual representation recorded in the subsequent user session…". However, it is unclear to what "map" the visual representation is being compared. The claim will be interpreted in a manner as best understood by the examiner, wherein the "map" is a "visual representation".

As to dependent claim 29, the claim recites "the questions relate to experiential intensity values relating to positive motivators and negative motivators in each experiential instance in at least two states…the received quantitative scores relate to the facilitating or inhibiting intensities of experiential features…compute a distance measure for each of the experiences…enabling a user to select the experiential features on which the user should focus their work".
However, it is unclear what each experiential instance in at least two states refers to. An experiential instance has not been previously described, much less "each experiential instance" in two states. It is further unclear what the metes and bounds of the experiential features are and what the "distance measure for each of the experiences" entails. The claim will be interpreted in a manner as best understood by the examiner, such that where the prior art teaches questions relating to a positive/increase or negative/decrease in an action, wherein a score/value is associated with the positive/negative, and the system provides various options for the user to select an activity that correlates to the positive/negative action, it meets the claimed invention.

Appropriate correction is required. Applicant is reminded that any amendments must be fully supported by the originally filed specification.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 4-8, 10-13, 15-16, 18, and 22-31 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 4-8, 10-13, 15-16, 18, and 27-29 recite a system for guiding a user through an exercise sequence, which is within the statutory class of a machine. Claims 25 and 30 recite a method for guiding a user through an exercise sequence, which is within the statutory category of a process. Claims 26 and 31 recite a method for guiding a user through an exercise sequence, which is within the statutory category of a process.
Claims are eligible for patent protection under § 101 if they are in one of the four statutory categories and not directed to a judicial exception to patentability. Alice Corp. v. CLS Bank Int'l, 573 U.S. ___ (2014). Claims 4-8, 10-13, 15-16, 18, and 22-31, each considered as a whole and as an ordered combination, are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

MPEP 2106 Step 2A – Prong 1:

The bolded limitations of claims 4, 25, and 26 (claim 4 being representative):

(a) a breath flow sensor adapted to track a breathing pattern of the user;
(b) a motion detector comprising at least one of a tactile sensor, an image sensor, or an audio sensor for tracking motion of the user;
(c) a speech detector comprising at least one of a microphone, a sound sensor, or an audio sensor for capturing speech patterns of the user, the speech patterns reflecting a mental exercise of the user;
(d) at least one output interface, the at least one output interface including a speaker adapted to provide to the user, during carrying out of the exercise sequence, at least one of voice instructions and sounds reflecting a rhythm of the exercise sequence, the sounds being selected from beat sounds, rhythmic sounds, and musical sounds;
(e) a timer; and
(f) a processor, functionally associated with the breath flow sensor, the motion detector, the speech detector, the at least one output interface, the timer, and the communication interface, the processor operative, within a specific user session, to:
i. select at least two exercises to be carried out synchronously by the user, the at least two exercises including exercises of at least two of the categories of breath exercises, body exercises, and mind exercises;
ii. provide to the user, via the speaker, the sounds to which the at least two exercises are to be synchronized;
iii.
guide the user, via the at least one output interface, to carry out an exercise sequence including the selected at least two exercises at the provided sounds;
iv. receive at least two inputs including at least two of a breathing pattern input from the breath flow sensor, a motion input from the motion detector, and a speech input from the speech detector, the at least two inputs reflecting the user's synchronous carrying out of the exercises;
v. using the timer and based on the at least two inputs, quantify a degree of the user's synchronization of at least two of breathing, motion, and speech during carrying out of the selected exercises;
vi. record in a user profile associated with the user a quantification of a synchronization level achieved by the user during synchronous carrying out of the selected exercises; and
vii. provide to the user, via the at least one output interface, a quantitative value reflecting the synchronization level achieved by the user during synchronous carrying out of the selected exercises,

as presently drafted, under the broadest reasonable interpretation, cover a method of organizing human activity (i.e., managing personal behavior, including following rules or instructions) but for the recitation of generic computer components. For example, but for the noted computer elements, the claim encompasses a person following rules or instructions to perform synchronized exercises and receiving feedback thereof in the manner described in the abstract idea. The examiner further notes that "methods of organizing human activity" include a person's interaction with a computer (see October 2019 Update: Subject Matter Eligibility at pg. 5). If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the "method of organizing human activity" grouping of abstract ideas.
Accordingly, the claim recites an abstract idea.

MPEP 2106 Step 2A – Prong 2:

This judicial exception is not integrated into a practical application because there are no meaningful limitations that transform the exception into a patent-eligible application. The additional elements merely amount to instructions to apply the exception using generic computer components ("at least one of a tactile sensor, an image sensor, or an audio sensor," "at least one of a microphone, a sound sensor, or an audio sensor," "an output interface including a speaker," "a processor," and "communication interface," all recited at a high level of generality). Although these components execute instructions to perform the abstract idea itself, this does not serve to integrate the abstract idea into a practical application, as it merely amounts to instructions to "apply it." (See MPEP 2106.04(d)(2), indicating that mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application.) Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea.

The "breath flow sensor" is not a generic computer component; however, it is recited at a high level of generality and similarly amounts to generally linking the abstract idea to a particular technological environment. (See MPEP 2106.04(d)(1), indicating that generally linking an abstract idea to a particular technological environment does not amount to integrating the abstract idea into a practical application.) The claims only manipulate abstract data elements into another form. They do not set forth improvements to another technological field or to the functioning of the computer itself, and instead use computer elements as tools in a conventional way to improve the functioning of the abstract idea identified above.
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. None of the additional elements recited "offers a meaningful limitation beyond generally linking 'the use of the [method] to a particular technological environment,' that is, implementation via computers." Alice Corp., slip op. at 16 (citing Bilski v. Kappos, 561 U.S. 610, 611 (2010)).

At the levels of abstraction described above, the claims do not readily lend themselves to a finding that they are directed to a nonabstract idea. Therefore, the analysis proceeds to Step 2B. See BASCOM Global Internet v. AT&T Mobility LLC, 827 F.3d 1341, 1349 (Fed. Cir. 2016) ("The Enfish claims, understood in light of their specific limitations, were unambiguously directed to an improvement in computer capabilities. Here, in contrast, the claims and their specific limitations do not readily lend themselves to a step-one finding that they are directed to a nonabstract idea. We therefore defer our consideration of the specific claim limitations' narrowing effect for step two.") (citations omitted).

MPEP 2106 Step 2B:

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the same reasons as presented in Step 2A, Prong 2.
Moreover, the additional elements recited are known and conventional generic computing elements ("at least one of a tactile sensor, an image sensor, or an audio sensor," "at least one of a microphone, a sound sensor, or an audio sensor," "an output interface including a speaker," "a processor," "communication interface"; see Specification pages 9 and 17, describing the various components as general-purpose, common, standard, known to one of ordinary skill, and at a high level of generality, and in a manner that indicates that the additional elements are sufficiently well known that the specification does not need to describe the particulars of such additional elements to satisfy the statutory disclosure requirements). Therefore, these additional elements amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept that amounts to significantly more. See MPEP 2106.05(f).

The Federal Circuit has recognized that "an invocation of already-available computers that are not themselves plausibly asserted to be an advance, for use in carrying out improved mathematical calculations, amounts to a recitation of what is 'well-understood, routine, [and] conventional.'" SAP Am., Inc. v. InvestPic, LLC, 890 F.3d 1016, 1023 (Fed. Cir. 2018) (alteration in original) (citing Mayo v. Prometheus, 566 U.S. 66, 73 (2012)). Apart from the instructions to implement the abstract idea, the additional elements only serve to perform well-understood functions (e.g., receiving, translating, and displaying data; see the Specification as cited above, as well as Alice Corp.; Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307 (Fed. Cir. 2016); and Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334 (Fed. Cir. 2015), covering the well-known nature of these computer functions).
Furthermore, as discussed above, the additional element of a "breath flow sensor" is recited at a high level of generality and was determined to generally link the abstract idea to a particular technological environment or field of use. This additional element has been re-evaluated under Step 2B and has also been found insufficient to provide significantly more. (See MPEP 2106.05(A), indicating that generally linking an abstract idea to a particular technological environment does not amount to significantly more.) Furthermore, the Background section of Applicant's Specification (e.g., see page 9) indicates that the sensors are well-understood, routine, and conventional in the field. (See MPEP 2106.05(I)(A), indicating that well-understood, routine, and conventional activities cannot provide significantly more.)

Dependent Claims

The limitations of the dependent claims, but for those addressed below, merely set forth further refinements of the abstract idea without changing the analysis already presented. Claim 13 merely recites the type of motion; claims 15-16 and 18 merely recite providing exercises based on a user profile and guiding a user using various themes based on user input/feedback; claims 22-24 merely recite computationally quantifying scores and generating a plot; claim 27 merely recites quantifying synchronization through the use of plots; and claim 29 recites providing an experience to a user based on positive and negative motivators. These limitations cover a method of organizing human activity (i.e., managing personal behavior, including following rules or instructions). Claims 5-8 and 10-12 merely describe the types of sensors and output and are analyzed in the same manner as the sensors of the independent claims; they do not provide a practical application or amount to significantly more, for the same reasons detailed above.
Claim 25 further recites a mathematical concept which is claimed in such a generalized manner that it also encompasses a person mentally performing the math; see MPEP § 2106.04(a)(2), including the discussion of mental processes. Claims 16, 28, 30, and 31 further refine the abstract idea described in the independent claims and further recite receiving input via an input device and/or providing output through an output device. These additional elements are considered to "apply it" under both the practical application and significantly more analyses, as detailed above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4-8, 10-12, 15, 25, 27, 28, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Rosenblood (USPPN: 2020/0289036; hereinafter Rosenblood) in view of Wang et al. (USPPN: 2022/0101748; hereinafter Wang) and further in view of Bardy et al. (USPPN: 2015/0087922; hereinafter Bardy).
As to claim 4, Rosenblood teaches A system for improving synchronization of breath exercises, body exercises, and mind exercises of a user (e.g., see Title, Abstract), the system comprising: (a) a breath flow sensor adapted to track a breathing pattern of the user (e.g., see [0065] teaching a sensor for detecting movement of a user, including breathing); (b) a motion detector comprising at least one of a tactile sensor, an image sensor, or an audio sensor for tracking motion of the user (e.g., see [0064], [0112] teaching a sensor for detecting motion of a user, such as through video images); (c) a speech detector comprising at least one of a microphone, a sound sensor, or a voice sensor for capturing speech patterns of the user, the speech patterns reflecting a mental exercise of the user (e.g., see [0071] teaching a microphone for detecting voice input. Notably, “the speech patterns reflecting a mental exercise of the user” is interpreted as an intended use. Applicant is reminded that, typically, no patentable distinction is made by an intended use or result unless some structural difference is imposed by the use or result on the structure or material recited in the claim, or some manipulative difference is imposed by the use or result on the action recited in the claim. An intended use generally does not impart a patentable distinction if it merely states an intention or is a description of how the claimed apparatus is to be used. (See MPEP 2111.05)); (d) at least one output interface for providing output to the user, the at least one output interface including a speaker adapted to provide to the user, during carrying out of the exercise sequence, at least one of voice instructions and sounds reflecting a rhythm of the exercise sequence, the sounds being selected from beat sounds, rhythmic sounds, and musical sounds (e.g., see Fig. 1, [0063] teaching a speaker for providing output.
Notably, “during carrying out of the exercise sequence, at least one of voice instructions and sounds reflecting a rhythm of the exercise sequence, the sounds being selected from beat sounds, rhythmic sounds, and musical sounds” is interpreted as an intended use. Applicant is reminded that, typically, no patentable distinction is made by an intended use or result unless some structural difference is imposed by the use or result on the structure or material recited in the claim, or some manipulative difference is imposed by the use or result on the action recited in the claim. An intended use generally does not impart a patentable distinction if it merely states an intention or is a description of how the claimed apparatus is to be used. (See MPEP 2111.05). Accordingly, Rosenblood, having taught a speaker for providing output, teaches the claimed limitation); (e) a timer (e.g., see Fig. 23, [0113] teaching a timer); and (f) a processor, functionally associated with the flow sensor, the motion detector, the speech detector, the at least one output interface, and the timer (e.g., see [0059] teaching a processor), the processor operative to: i. select at least two exercises to be carried out synchronously by the user, the at least two exercises including exercises of at least two of the categories of breath exercises, body exercises, and mind exercises (e.g., see [0109]-[0111], [0131] wherein two or more activities are selected to be performed simultaneously and in synchronicity, the activities including a physical activity and breathing activity); ii. provide to the user, via the speaker, the sounds to which the at least two exercises are to be synchronized (e.g., see [0109]-[0111], [0117], [0131], [0133], [0135], [0137] wherein the exercises are performed with guided pacing and movement and for a set length in each cycle, wherein a sound may be transmitted during the exercise to the breathing cycle); iii.
guide the user, via the at least one output interface, to carry out an exercise sequence including the selected at least two exercises at the provided sounds (e.g., see [0109]-[0111], [0117] wherein the exercises are performed with guided pacing and movement and sounds); iv. receive at least two inputs including at least two of a breathing pattern input from the breath flow sensor, a motion input from the motion detector, and a speech input from the speech detector, the at least two inputs reflecting the user’s synchronous carrying out of the exercises (e.g., see [0109], [0115], [0145]-[0147] wherein the physical activity and breathing activity are performed simultaneously and are tracked/monitored); v. using the timer and based on the at least two inputs, quantify a degree of the user's synchronization of at least two of breathing, motion, and speech during carrying out of the selected exercises (e.g., see Fig. 24, [0110], [0111], [0114], [0131], [0146] wherein a user is provided qualitative results and/or feedback on their performance, simultaneously, of two or more activities, including breathing and movement activities); vi. record in a user profile associated with the user a quantification of a synchronization level achieved by the user during synchronous carrying out of the selected exercises (e.g., see [0114], [0144] wherein a user’s performance of the simultaneous physical and breathing activities over time is maintained); and vii. provide to the user, via the at least one output interface, a quantitative value reflecting the synchronization level achieved by the user during synchronous carrying out of the selected exercises (e.g., see Fig. 24, [0109], [0114] displaying qualitative results of the user’s performance of two or more activities done simultaneously).
Rosenblood teaches generating and showing a qualitative result of the user’s performance, wherein the performance is of two or more activities performed simultaneously, and therefore reads upon the claimed limitation. However, for the purposes of compact prosecution and in the same field of endeavor of monitoring and improving user attention, Wang teaches provide to the user, via the at least one output interface, a quantitative value reflecting the synchronization level achieved by the user during synchronous carrying out of the selected exercises (e.g., see [0015], [0034], [0038] teaching displaying a breathing-muscle force synchronization indicator which reflects the synchronization between a user’s breath and physical activity in real-time to provide timely feedback to the user). Accordingly, it would have been obvious to modify Rosenblood in view of Wang with a reasonable expectation of success. One would have been motivated to make the modification to help users quickly enter and stay in a desired attention state (e.g., see [0021] of Wang). While Rosenblood teaches a sensor for tracking the breathing of a user, for the purposes of compact prosecution and in the same field of endeavor of systems for monitoring a user, Bardy teaches a breath flow sensor adapted to track a breathing pattern of the user (e.g., see [0015], [0057] teaching a flow sensor for tracking the breathing pattern of a user). Accordingly, it would have been obvious to modify Rosenblood-Wang in view of Bardy with a reasonable expectation of success. One would have been motivated to make the modification as a simple substitution of one known type of biometric sensor for another to yield the predictable results of detecting feedback from a plurality of sensors to determine a state a person is experiencing (e.g., see [0058] of Grace. See also KSR Int’l v. Teleflex Inc., 127 S. Ct. 1727, 1740-41, 82 USPQ2d 1385, 1396 (2007); and MPEP 2143).
As to claim 5, the rejection of claim 4 is incorporated. While Rosenblood teaches a plurality of different sensors, Rosenblood fails to teach wherein the breath flow sensor comprises a microphone adapted to measure a rhythmic airflow of breath cycles thereby to track the breathing pattern of the user. However, in the same field of endeavor of monitoring a user, Bardy teaches wherein the breath flow sensor comprises a microphone adapted to measure a rhythmic airflow of breath cycles thereby to track the breathing pattern of the user (see 112 rejection above; e.g., see [0057] wherein the flow sensor further includes a microphone). Accordingly, it would have been obvious to modify Rosenblood-Wang in view of Bardy with a reasonable expectation of success. One would have been motivated to make the modification as a simple substitution of one known type of biometric sensor for another to yield the predictable results of detecting feedback from a plurality of sensors to determine a state a person is experiencing (e.g., see [0058] of Grace. See also KSR Int’l v. Teleflex Inc., 127 S. Ct. 1727, 1740-41, 82 USPQ2d 1385, 1396 (2007); and MPEP 2143). As to claim 6, the rejection of claim 4 is incorporated. While Rosenblood teaches a plurality of different sensors, Rosenblood fails to teach wherein the breath detecting module comprises an air-flow sensor. However, in the same field of endeavor of monitoring a user, Bardy teaches wherein the breath detecting module comprises an air-flow sensor (e.g., see [0057] teaching the use of air flow sensors). Accordingly, it would have been obvious to modify Rosenblood-Wang in view of Bardy with a reasonable expectation of success. One would have been motivated to make the modification as a simple substitution of one known type of biometric sensor for another to yield the predictable results of detecting feedback from a plurality of sensors to determine a state a person is experiencing (e.g., see [0058] of Grace.
See also KSR Int’l v. Teleflex Inc., 127 S. Ct. 1727, 1740-41, 82 USPQ2d 1385, 1396 (2007); and MPEP 2143). As to claim 7, the rejection of claim 4 is incorporated. Rosenblood further teaches wherein the breath flow sensor comprises an image capturing element adapted to capture images of the user, and wherein the processor is configured to identify, in the captured images, changes to the body of the user reflecting the breathing pattern of the user (see 112 rejection above; e.g., see Fig. 23, [0109], [0113] teaching the use of cameras to track a user’s activity and breathing). As to claim 8, the rejection of claim 4 is incorporated. Rosenblood further teaches wherein the motion detector comprises an audio receiver adapted to detect a sound there is a sound generated by the motion (see 112 rejection; e.g., see [0065], [0071] teaching a plurality of sensors for detecting motion including breathing, wherein a microphone can further be used to detect breathing. Notably, “a sound generated by the motion” is interpreted as an intended use. Applicant is reminded that, typically, no patentable distinction is made by an intended use or result unless some structural difference is imposed by the use or result on the structure or material recited in the claim, or some manipulative difference is imposed by the use or result on the action recited in the claim. An intended use generally does not impart a patentable distinction if it merely states an intention or is a description of how the claimed apparatus is to be used. (See MPEP 2111.05). Accordingly, Rosenblood, having taught a sensor for detecting sound, teaches the claimed limitation). As to claim 10, the rejection of claim 4 is incorporated. While Rosenblood teaches a touch-enabled device (e.g., see Fig. 4, [0074], [0083] teaching a touchscreen display), Rosenblood fails to explicitly teach wherein the motion detecting module comprises a tactile sensor adapted to sense the motion carried out by the user.
However, in the same field of endeavor of guiding a user towards an optimal state, Wang teaches wherein the motion detecting module comprises a tactile sensor adapted to sense the motion carried out by the user (e.g., see [0021] teaching tactile receptors for detecting user actions). Accordingly, it would have been obvious to modify Rosenblood in view of Wang with a reasonable expectation of success. One would have been motivated to make the modification as a simple substitution of one known type of biometric sensor for another to yield the predictable results of detecting feedback from a plurality of sensors to determine a state a person is experiencing (e.g., see [0058] of Grace. See also KSR Int’l v. Teleflex Inc., 127 S. Ct. 1727, 1740-41, 82 USPQ2d 1385, 1396 (2007); and MPEP 2143). As to claim 11, the rejection of claim 10 is incorporated. Rosenblood-Wang further teaches wherein the tactile sensor comprises a touchpad or a touchscreen of a computing device (e.g., see Fig. 4, [0074], [0083] of Rosenblood teaching a touchscreen display). As to claim 12, the rejection of claim 4 is incorporated. Rosenblood further teaches wherein the motion detector comprises an image capturing element adapted to capture images of the user, and wherein the processor is configured to identify, in the captured images, motions of the user (e.g., see Fig. 23, [0109], [0113] teaching the use of cameras to track a user’s activity and breathing). As to claim 15, the rejection of claim 4 is incorporated. Rosenblood further teaches wherein the processor is configured to select the at least two exercises based on information stored in the user profile with respect to previous exercise sequences carried out by the user (e.g., see Fig. 11, [0096]-[0099] wherein exercises may be proposed and customized to a user to improve their posture, stretching, and deep breathing).
As to claim 25, the claim is directed to a similar method implemented by the system of claim 4 and further recites b) at a later time, carrying out a subsequent user session by repeating steps i to vii (e.g., see [0146] of Rosenblood wherein the activity can be performed at a later time); and c) comparing the quantified degree of the user’s synchronization recorded in the initial user session with the quantified degree of the user’s synchronization recorded in the subsequent user session to quantify an improvement in the synchronization of the user while carrying out selected exercise sequences (e.g., see [0096], [0144]-[0148] of Rosenblood wherein the user’s progress is further tracked by comparing their later performance with a previous recording), and is similarly rejected for the reasons outlined above; and providing to the user, via the at least one output interface, an output reflecting the quantified improvement in the synchronization (e.g., see Fig. 11, [0096] of Rosenblood wherein the dashboard displays a user’s progress). As to claim 27, the rejection of claim 4 is incorporated. Rosenblood fails to teach wherein the processor is adapted to quantify the degree of the user’s synchronization during carrying out of the exercise sequence by generating plots of a timing of each of the provided sounds, the user carrying out each of the selected exercises, and the user’s inhalations and exhalations during carrying out of each of the selected exercises, and comparing the plots to each other to quantify the degree of synchronization between the plots.
However, for the purposes of compact prosecution and in the same field of endeavor of monitoring and improving user attention, Wang teaches wherein the processor is adapted to quantify the degree of the user’s synchronization during carrying out of the exercise sequence by generating plots of a timing of each of the provided sounds, the user carrying out each of the selected exercises, and the user’s inhalations and exhalations during carrying out of each of the selected exercises, and comparing the plots to each other to quantify the degree of synchronization between the plots (e.g., see Fig. 4, [0043]-[0059] teaching displaying a breathing-muscle force synchronization indicator which reflects the synchronization between a user’s breath and physical activity in real-time to provide timely feedback to the user, wherein the degree of synchronization is evaluated using plotted values of the breathing exercise and physical action). Accordingly, it would have been obvious to modify Rosenblood in view of Wang with a reasonable expectation of success. One would have been motivated to make the modification to help users quickly enter and stay in a desired attention state (e.g., see [0021] of Wang). As to claim 28, the rejection of claim 4 is incorporated. Rosenblood fails to teach when providing the quantitative value, to provide to the user, via the at least one output interface, a quantitative value reflecting a refinement of the user’s synchronization during carrying out of the exercise sequence across the multiple progressive user sessions of user exercise (see 112 rejections above).
However, for the purposes of compact prosecution and in the same field of endeavor of monitoring and improving user attention, Wang teaches when providing the quantitative value, to provide to the user, via the at least one output interface, a quantitative value reflecting a refinement of the user’s synchronization during carrying out of the exercise sequence across the multiple progressive user sessions of user exercise (see 112 rejections above; e.g., see [0019], [0021], [0038] wherein a user is provided real-time feedback of their progress, including an indicator if they have a low level of attention, wherein no longer receiving a low-level indicator would obviously, if not necessarily, reflect an improvement). Accordingly, it would have been obvious to modify Rosenblood in view of Wang with a reasonable expectation of success. One would have been motivated to make the modification to help users quickly enter and stay in a desired attention state (e.g., see [0021] of Wang). As to claim 30, the rejection of claim 25 is incorporated. Rosenblood further teaches wherein the carrying out of the initial user session further includes providing to the user, via the at least one output interface, a quantitative value reflecting the synchronization level (e.g., see Fig. 24, [0109], [0114] displaying qualitative results of the user’s performance of two or more activities done simultaneously). …
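Claim 27, as discussed in the rejection, quantifies synchronization by generating timing plots of the cue sounds, the user's movements, and the user's inhalations/exhalations, and comparing the plots to each other. The sketch below is not code from the application or from any cited reference; it is a minimal, generic illustration, assuming evenly sampled 0/1 event traces and using normalized zero-lag correlation as the comparison metric (the function and variable names are illustrative only).

```python
# Hypothetical sketch: scoring how well two timing traces line up.
# Traces are equal-length lists of floats sampled on the same clock,
# e.g., 1.0 at samples where an event occurs and 0.0 elsewhere.

def sync_score(trace_a, trace_b):
    """Normalized zero-lag correlation in [-1, 1]; 1.0 = perfectly in sync."""
    if len(trace_a) != len(trace_b) or not trace_a:
        raise ValueError("traces must be non-empty and of equal length")
    n = len(trace_a)
    mean_a = sum(trace_a) / n
    mean_b = sum(trace_b) / n
    dev_a = [x - mean_a for x in trace_a]
    dev_b = [x - mean_b for x in trace_b]
    num = sum(a * b for a, b in zip(dev_a, dev_b))
    den = (sum(a * a for a in dev_a) * sum(b * b for b in dev_b)) ** 0.5
    return num / den if den else 0.0

# Illustrative data: cue beats every third sample; the user matches the
# first two beats but is one sample late on the third.
beats  = [1, 0, 0, 1, 0, 0, 1, 0, 0]   # timing plot of provided sounds
breath = [1, 0, 0, 1, 0, 0, 0, 1, 0]   # detected inhalation onsets
score = sync_score(beats, breath)       # below 1.0: one beat mismatched
```

Per-exercise scores of this kind could be recorded in a user profile and compared across sessions, which is the shape of the comparison recited in claims 25 and 28.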

Prosecution Timeline

Apr 05, 2023
Application Filed
May 06, 2025
Non-Final Rejection — §101, §103, §112
Aug 06, 2025
Response Filed
Nov 07, 2025
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12488881
SYSTEM METHOD AND NETWORK FOR EVALUATING THE PROGRESS OF A MANAGED CARE ORGANIZATION PATIENT WELLNESS GOALS
2y 5m to grant Granted Dec 02, 2025
Patent 12367987
TECHNOLOGIES FOR MANAGING CAREGIVER CALL REQUESTS VIA SHORT MESSAGE SERVICE
2y 5m to grant Granted Jul 22, 2025
Patent 12341851
SYSTEMS, METHODS, AND SOFTWARE FOR ACCESSING AND DISPLAYING DATA FROM IMPLANTED MEDICAL DEVICES
2y 5m to grant Granted Jun 24, 2025
Patent 12327642
SYSTEM AND METHOD FOR PROVIDING TELEHEALTH SERVICES USING TOUCHLESS VITALS AND AI-OPTIMIZED ASSESSMENT IN REAL-TIME
2y 5m to grant Granted Jun 10, 2025
Patent 12237089
ONLINE MONITORING OF CLINICAL DATA DRIFTS
2y 5m to grant Granted Feb 25, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
39%
Grant Probability
73%
With Interview (+34.1%)
3y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 352 resolved cases by this examiner. Grant probability derived from career allow rate.
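The headline figures in this panel follow from simple arithmetic on the examiner's career data shown above. A minimal sketch, using the numbers from this page; the additive percentage-point formula for the interview-adjusted figure is an assumption about how the dashboard combines the two values:

```python
# Reproducing the dashboard figures from the career data on this page.
granted, resolved = 138, 352               # career grants / resolved cases
interview_lift = 34.1                      # interview lift, percentage points

allow_rate = granted / resolved            # career allow rate, ~0.392
grant_prob = round(allow_rate * 100)       # headline grant probability: 39
with_interview = round(allow_rate * 100 + interview_lift)  # 73
```

With these inputs, `grant_prob` comes out to 39 and `with_interview` to 73, matching the 39% and 73% shown above.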
