DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Claim(s) 19-20 is/are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 4 December 2025.
Applicant’s election without traverse of claim(s) 1-18 in the reply filed on 4 December 2025 is acknowledged.
Information Disclosure Statement
The information disclosure statement filed 12 September 2025 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. The Examiner notes that each of NPLs numbered 5 [Office Action dated August 8, 2025, US Application No. 17/890,971], 21 [CIOBANU, Madalina, et al.], and 22 [MAHARJAN, Jenish, et al.] is cited in the IDS but was not filed in the application file wrapper. The Examiner also notes that an additional NPL [Office Action dated August 8, 2025, US Application 17/955,616] was filed in the application file wrapper on 12 September 2025, but was not cited in the corresponding IDS dated 12 September 2025. The IDS has been placed in the application file, but the information referred to therein has not been considered [see annotations on IDS dated 12 September 2025 for NPLs not considered].
Applicant is advised that the date of any re-submission of any item of information contained in this information disclosure statement or the submission of any missing element(s) will be the date of submission for purposes of determining compliance with the requirements based on the time of filing the statement, including all certification requirements for statements under 37 CFR 1.97(e). See MPEP § 609.05(a).
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: “interface 730” in ¶0081.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: “1”, “2”, “3”, “4”, “5”, “6”, “7”, “8”, “9” in Fig. 6.
Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claim(s) 1, 5-7, 10, 13-14, and 16 are objected to because of the following informalities:
Claim 1 should read “exceeds” [lines 10, 11, 12, 18, 18, 19, each instance].
Claim 5 should read “comprises” [line 1].
Claim 6 should read “comprises” [line 1].
Claim 6 should read “one [[of]] or more electromyogram (EMG) features” [lines 3-4].
Claim 6 should read “exceeds” [line 6].
Claim 7 should read “comprises” [line 1].
Claim 7 should read “exceeds” [line 4].
Claim 10 should read “based on the target data and/or [[the]] a magnitude of difference between the sensor data of the subject and the target data” [line 4].
Claim 13 should read “The method of claim[[s]] 12” [line 1].
Claim 14 should read “exceeds” [lines 9, 14, each instance].
Claim 16 should read “exceeds” [line 9].
Appropriate correction is required.
Claim Interpretation
Examiner’s Note: currently, NO limitation invokes interpretation under § 112(f).
Claim Rejections - 35 USC § 112
Examiner’s Note Regarding Machine Learning: the claimed “model that is trained with sensor data from the subject and/or with sensor data from at least one additional subject” of claim(s) 11 and those dependent therefrom, and the claimed “at least one machine learning model that is trained with sensor data from the subject and/or with data from at least one additional subject” of claim(s) 16, were considered under § 112(a). The Examiner notes that the disclosure of machine learning in the Applicant’s Specification [¶¶0068-0075] is considered to provide sufficient written description support for the trained model/machine learning model as presently claimed, such that one of ordinary skill in the art would understand that the Applicant possessed the instant invention at the time of filing.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim(s) 16 is/are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 16 recites the limitation “an audio sound track, wherein the audio sound track is familiar to the subject” [lines 12-13], which is considered indefinite, as it is not clear whether the recited limitation is meant to define a new audio sound track distinct from the previously defined audio sound track that is familiar to the subject of claim 14 [lines 11-13], from which claim 16 depends, or whether the recited limitation is meant to refer to that previously defined audio sound track of claim 14 [lines 11-13]. For examination purposes, the Examiner has interpreted either identified interpretation to be applicable in light of any prior art applied under § 102 or § 103, wherein any subsequent recitation of “the audio sound track” is similarly interpreted.
Examiner’s Note Regarding Subjective and Relative Terminology: The Examiner notes that the claims recite that the audio track is “familiar to the subject” [see claims 1, 10, 14, 16], wherein the Examiner notes that “familiar” is considered to be a relative term. However, the Examiner notes that the Applicant’s Specification is considered to provide a standard for measuring a degree of “familiarity” [The audio sound data, for example, audio sound data that is "familiar" to the subject, may be particularly associated with the subject, for example, based upon a determination that the audio sound data is effective to elicit a response (e.g., a physiological response) in the subject, for instance, a determination that the audio sound data (e.g., the audio sound track) is effective to prevent onset of a meltdown stage for the subject or decrease severity of a meltdown stage for the subject… The audio sound track is familiar to the subject, e.g., the subject has heard the audio sound track previously on more than one occasion, and the audio sound track has consistently (e.g., on two or more distinct occasions) displayed a calming or soothing effect (as opposed to an agitating effect) on the subject (Applicant’s Specification ¶0033)]. As such, the use of “familiar” is NOT considered to render any claim indefinite.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1-18 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. Each claim has been analyzed to determine whether it is directed to any judicial exceptions.
Representative claim(s) 1 [representing all independent claims] recite(s):
A method comprising:
(a) acquiring sensor data from one or more wearable sensors configured to be worn by a subject, wherein the sensor data comprise motion data, sound data, physiological data, or combinations thereof of the subject;
(b) comparing the sensor data of the subject with target data, wherein the motion data, the sound data, the physiological data, or combinations thereof of the subject are compared with target motion data, target sound data, target physiological data, or combinations thereof, respectively;
(c) determining, in any sequence, at least one of the following: that the motion data of the subject is equal to or exceed the target motion data; that the sound data of the subject is equal to or exceed the target sound data; and that the physiological data of the subject is equal to or exceed the target physiological data; and
(d) responsive to step (c), delivering audible sound therapy to the subject, wherein the audible sound therapy comprises an audio sound track, wherein the audio sound track is characterized by a track rhythm and by a track beat, wherein the audio sound track is familiar to the subject, and wherein the audio sound track is repeated at least until it is determined, in any sequence, at least one of the following: that the target motion data exceed the motion data of the subject; that the target sound data exceed the sound data of the subject; and that the target physiological data exceed the physiological data of the subject.
(Emphasis added: abstract idea, additional element)
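For context only, the comparing/determining/repeating logic of steps (b)-(d) of representative claim 1 can be summarized in the following minimal sketch. This sketch forms no part of the record; every name, data shape (dictionary-valued sensor readings), and value is a hypothetical stand-in, not the claimed implementation.

```python
def exceeds_target(sensor, target):
    """Step (c): True if any modality is equal to or exceeds its target."""
    return any(sensor[k] >= target[k] for k in sensor if k in target)

def deliver_therapy(sensor_stream, target, play_track):
    """Steps (b)-(d): on a threshold crossing, repeat the audio sound
    track until every modality falls back below its target.

    sensor_stream: iterator of per-reading dicts, e.g. {"motion": 5}
    target:        dict of target values per modality
    play_track:    callable standing in for one playback of the track
    """
    for sensor in sensor_stream:              # step (a)/(b): observe and compare
        if exceeds_target(sensor, target):    # step (c): determination made
            while exceeds_target(sensor, target):
                play_track()                  # step (d): repeat the track
                sensor = next(sensor_stream)  # re-acquire and re-compare
            return sensor                     # first reading below target
    return None
```

The sketch makes explicit that, as the Step 2A Prong 1 analysis observes, each comparison is a simple per-modality threshold test.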
Step 2A Prong 1
Representative claim(s) 1 recites the following abstract ideas, which may be performed in the mind or by hand with the assistance of pen and paper:
“(a) acquiring sensor data from one or more wearable sensors configured to be worn by a subject, wherein the sensor data comprise motion data, sound data, physiological data, or combinations thereof of the subject” – may be performed by merely observing known or previously collected data; the Examiner further notes that the limitation as recited is not considered to be a positive recitation of any step of data gathering using any sensors, as the recitation of the acquiring being “from one or more wearable sensors configured to be worn by a subject” and the sensor data comprising “motion data, sound data, physiological data, or combinations thereof of the subject” merely characterizes the type of data observed/previously collected
“(b) comparing the sensor data of the subject with target data, wherein the motion data, the sound data, the physiological data, or combinations thereof of the subject are compared with target motion data, target sound data, target physiological data, or combinations thereof, respectively” – may be performed by merely observing known or previously collected data and drawing mental conclusions therefrom based on known or previously collected data for comparison
“(c) determining, in any sequence, at least one of the following: that the motion data of the subject is equal to or exceed the target motion data; that the sound data of the subject is equal to or exceed the target sound data; and that the physiological data of the subject is equal to or exceed the target physiological data” – may be performed by merely observing known or previously collected data and drawing mental conclusions therefrom based on known or previously collected data for comparison
“(d) responsive to step (c), delivering audible sound therapy to the subject, wherein the audible sound therapy comprises an audio sound track, wherein the audio sound track is characterized by a track rhythm and by a track beat, wherein the audio sound track is familiar to the subject, and wherein the audio sound track is repeated at least until it is determined” – may be considered a method of organizing human activity relating to managing personal behavior or relationships or interactions between people, by merely verbally communicating any sound, as the Examiner notes that all sounds may be considered to be defined by a rhythm and beat, and the recitation of the sound being “familiar” is considered to be subjective, such that any sound may be “familiar”, or may be considered an instruction for someone to verbally communicate any sound; the Examiner notes that the recitation that “(d) responsive to step (c)” merely limits when the abstract idea may be performed, and wherein the recitation of the audio sound track being repeated until a determination is made merely defines performing the abstract idea for a predetermined amount of time; however for the sake of compact prosecution, the Examiner notes that the identified limitation may instead be interpreted as an additional element and is analyzed at Step 2A Prong 2 and Step 2B below, in the alternative
“it is determined, in any sequence, at least one of the following: that the target motion data exceed the motion data of the subject; that the target sound data exceed the sound data of the subject; and that the target physiological data exceed the physiological data of the subject” – may be performed by merely observing known or previously collected data and drawing mental conclusions therefrom based on known or previously collected data for comparison
If a claim, under BRI, covers performance of the limitations in the mind but for the mere recitation of extra-solution activity (and otherwise generic computer elements), then the claim falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A Prong 1 of the Mayo framework as set forth in the 2019 PEG.
No limitations are provided that would render any of the identified evaluation steps too complex to be performed by pen-and-paper practice.
The dependent claims merely include limitations that either further define the abstract idea [e.g. limitations relating to the data gathered or particular steps which are entirely embodied in the mental process] and amount to no more than generally linking the use of the abstract idea to a particular technological environment or field of use because they are merely incidental or token additions to the claims that do not alter or affect how the process steps are performed.
Thus, these concepts are similar to concepts the courts have identified as abstract ideas: collecting, displaying, and manipulating data [Intellectual Ventures v. Capital One Financial]; collecting information, analyzing it, and displaying certain results of the collection and analysis [Electric Power Group]; and collection, storage, and recognition of data [Smart Systems Innovations].
Step 2A Prong 2
The judicial exception is not integrated into a practical application.
Representative claim 1 only recites additional elements of extra-solution activity [generic computer function of delivering an audio signal of an audible sound therapy (wherein the Examiner notes that the recitation of the audible sound being “therapy” is not considered to limit the audible sound itself) via a generic speaker] without further sufficient detail that would tie the abstract portions of the claim into a specific practical application (2019 PEG p. 55); the instant claim, for example, does not tie into a particular machine, a sufficiently particular form of data or signal collection via the claimed generic computer function, or a sufficiently particular form of display or computing architecture/structure.
Dependent claim(s) 2, 4-7, 10, 12, 15, and 18 merely add detail to the abstract portions of the claim but do not otherwise encompass any additional elements which tie the claim(s) into a particular application/integration [the dependent claim(s) recite generic ‘units’ or ‘steps’ which encompass mere computer instructions to carry out an otherwise wholly abstract idea].
Dependent claim(s) 8-9 and 13 encounter substantially the same issues as the independent claim(s) from which they depend in that they encompass further generic extra-solution activity [generic data gathering] and/or generic computer elements [storage, memory per se].
Accordingly, the claim(s) are not integrated into a practical application under Step 2A Prong 2.
Step 2B
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Independent claims 1 and 14 as individual wholes fail to amount to significantly more than the judicial exception at Step 2B. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of extra-solution activity [i.e., generic computer function] and generic computer elements cannot amount to significantly more than an abstract idea [MPEP § 2106.05(f)] and are further considered to merely implement an abstract idea on a generic computer [MPEP § 2106.05(d)(II) establishes computer-based elements which are considered to be well understood, routine, and conventional when recited at a high level of generality].
For the independent claim portions and dependent claims which provide additional elements of extra-solution data gathering, MPEP § 2106.05(g) establishes that mere data gathering for determining a result does not amount to significantly more. The extra-solution activity of processor steps [acquiring, storing, transmitting signals, etc.], as presently recited, cannot provide an inventive concept which amounts to significantly more than the recited abstract idea.
For the independent claims as well as the dependent claims merely reciting generic computer elements and functions [computer elements of a processor, controller, speaker recited at a high level of generality, and functions therein], MPEP § 2106.05(d)(II) establishes computer-based elements which are considered to be well understood, routine, and conventional when recited at a high level of generality.
Accordingly, the generic computer elements and functions therein, as presently limited, cannot provide an inventive concept since they fall under a generic structure and/or function that does not add a meaningful additional feature to the judicial exception(s) of the claim(s).
Claim 11 recites “a model that is trained with sensor data from the subject and/or with sensor data from at least one additional subject; wherein the model is configured to evaluate the sensor data of the subject with respect to the target data to determine the presence or absence of a pre-meltdown stage for the subject” and claim 16 recites “at least one machine learning model that is trained with sensor data from the subject and/or with data from at least one additional subject, wherein the at least one machine learning model is configured to evaluate the sensor data of the subject with respect to target data… provide, by the control system, the input to the at least one machine learning model; determine, by the control system, an evaluation result”. Such a “model that is trained” and “machine learning model” are considered well-understood, routine, and conventional, as known by at least:
Hu (“Intelligent Sensor Networks”, NPL attached) [In supervised learning, the learner is provided with labeled input data. This data contains a sequence of input/output pairs of the form xi, yi, where xi is a possible input and yi is the correctly labeled output associated with it. The aim of the learner in supervised learning is to learn the mapping from inputs to outputs. The learning program is expected to learn a function f that accounts for the input/output pairs seen so far, f (xi) = yi, for all i. This function f is called a classifier if the output is discrete and a regression function if the output is continuous. The job of the classifier/regression function is to correctly predict the outputs of inputs it has not seen before (Hu, Page 5)]
Huang (“Kernel Based Algorithms for Mining Huge Data Sets”, NPL attached) [In supervised learning, the learner is provided with labeled input data. This data contains a sequence of input/output pairs of the form xi, yi, where xi is a possible input and yi is the correctly labeled output associated with it. The aim of the learner in supervised learning is to learn the mapping from inputs to outputs. The learning program is expected to learn a function f that accounts for the input/output pairs seen so far, f (xi) = yi, for all i. This function f is called a classifier if the output is discrete and a regression function if the output is continuous. The job of the classifier/regression function is to correctly predict the outputs of inputs it has not seen before (Huang, Page 1)]
Mitchell (“The Discipline of Machine Learning”, NPL attached) [For example, we now have a variety of algorithms for supervised learning of classification and regression functions; that is, for learning some initially unknown function f : X → Y given a set of labeled training examples {xi; yi} of inputs xi and outputs yi = f(xi) (Mitchell, Pages 3-4)]
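The supervised-learning formulation quoted above (labeled pairs (xi, yi), a learned function f with f(xi) = yi, a classifier when outputs are discrete) can be illustrated with a deliberately trivial sketch. The 1-nearest-neighbor rule, names, and labels below are hypothetical stand-ins for the unspecified learning algorithm, not the claimed model.

```python
def fit(pairs):
    """Memorize the labeled input/output pairs (x_i, y_i)."""
    return list(pairs)

def predict(model, x):
    """f(x): the label of the stored input nearest to x. Because the
    output is discrete, f is a classifier in the quoted terminology."""
    xi, yi = min(model, key=lambda p: abs(p[0] - x))
    return yi
```

On this sketch, f(xi) = yi holds for every training pair, and predict() extends f to unseen inputs, matching the quoted description of the classifier's job.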
Claim 14 recites “one or more wearable sensors configured to detect sensor data of a subject; wherein the one or more wearable sensors comprise at least one sensor configured to detect motion data, at least one sensor configured to detect sound data, at least one sensor configured to detect physiological data, or combinations thereof”, wherein the Examiner notes that claims 1, 8, and 11 fail to positively recite the “one or more wearable sensors”, however for the sake of compact prosecution, the analysis below is considered to be applicable. Such “one or more wearable sensors” are considered well-understood, routine, and conventional, as known by at least:
Applicant’s disclosure is not particular regarding the structure of the generically claimed “one or more wearable sensors”, and recites the “one or more wearable sensors” at a high level of generality [In an aspect, the wearable sensor comprises a wearable motion sensor, wherein the wearable motion sensor is configured to acquire motion data of the subject. The wearable motion sensor can comprise a motion sensing unit. The motion sensing unit may comprise a micro-electro-mechanical system (MEMS) based motion sensor, a gyroscope, an accelerometer, a magnetometer, a distance measurement sensor, an absolute position sensor (e.g., a trilateration device), and the like, or combinations thereof. In an aspect, the wearable sensor comprises a wearable sound sensor, wherein the wearable sound sensor is configured to acquire sound data of the subject. The wearable sound sensor can comprise a microphone, and optionally an amplifier. In an aspect, the wearable sensor comprises a wearable physiological sensor, wherein the wearable physiological sensor is configured to acquire physiological data of the subject. The wearable physiological sensor can comprise a pulse oximeter, a piezoelectric pressure sensor, a radio frequency identification (RFID) sensor, and the like, or combinations thereof (Applicant’s Specification ¶¶0022-0024)]. This lack of disclosure is acceptable under 35 U.S.C. 112(a) since this hardware performs non-specialized functions known by those of ordinary skill in the medical technology arts. Thus, Applicant's specification essentially admits that this hardware is conventional and performs well understood, routine and conventional activities in the field of wearable sensing devices.
In other words, Applicant’s specification demonstrates the well-understood, routine, conventional nature of the above-identified additional element because it describes such an additional element in a manner that indicates that the additional element is sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 U.S.C. 112(a) [see Berkheimer memo from April 19, 2018, Page 3, (III)(A)(1), not attached]. Adding hardware that performs “‘well understood, routine, conventional activit[ies]’ previously known to the industry” will not make claims patent-eligible [TLI Communications].
Zhao (US-20160113569-A1) [As would be understood by one of skill in the art, the “smart” wearable user devices may include processing systems, memory systems, communication systems, sensor systems, and/or any other devices or systems known in the art that allow those wearable user devices to collect the user health data and communicate it as discussed below. For example, the “smart” glasses 202 may collect audio data, video data (i.e., from the point of view of the first user 200, of the first user 200 via an eye or face facing camera, etc.), user movement (e.g., head movement) data, brainwave data, temperature data, breathing data, and/or any other user data known in the art that is collectable by “smart” glasses. Similarly, the “smart” watch 204 may collect user movement (e.g., arm and hand movement) data, pulse data, temperature data, and/or any other user data known in the art that is collectable by “smart” watches, the “smart” ring 206 may collect user movement (e.g., arm, hand, and finger movement) data, pulse data, temperature data, and/or any other user data known in the art that is collectable by “smart” rings, and the “smart” shoes 206 may collect user movement (e.g., foot and leg movement such as walking/running movements) data, pulse data, temperature data, and/or any other user data known in the art that is collectable by “smart” shoes (Zhao ¶0027)]
Claim 17 recites “an audio speaker, earbuds, and/or headphones, wherein the audible sound therapy is delivered via the audio speaker, earbuds, and/or headphones without requiring assistance from a caregiver”. Such an “audio speaker, earbuds, and/or headphones” is considered well-understood, routine, and conventional, as known by at least:
Applicant’s disclosure is not particular regarding the structure of the generically claimed “audio speaker, earbuds, and/or headphones”, and recites the “audio speaker, earbuds, and/or headphones” at a high level of generality [the device (e.g., wearable item or phone) can comprise speakers for delivering the audible sound therapy. Furthermore, the device (e.g., wearable item or phone) can be connected (e.g., wired or wireless connection) to headphones, earbuds, a speaker, a smart-speaker, etc. (Applicant’s Specification ¶0027)]. This lack of disclosure is acceptable under 35 U.S.C. 112(a) since this hardware performs non-specialized functions known by those of ordinary skill in the medical technology arts. Thus, Applicant's specification essentially admits that this hardware is conventional and performs well understood, routine and conventional activities in the field of auditory delivery devices. In other words, Applicant’s specification demonstrates the well-understood, routine, conventional nature of the above-identified additional element because it describes such an additional element in a manner that indicates that the additional element is sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 U.S.C. 112(a) [see Berkheimer memo from April 19, 2018, Page 3, (III)(A)(1), not attached]. Adding hardware that performs “‘well understood, routine, conventional activit[ies]’ previously known to the industry” will not make claims patent-eligible [TLI Communications].
Li (US-20190356989-A1) [a speaker (e.g., headphones, internal speakers of the signal processing device). Additionally or alternatively, the system interfaces (e.g., via wireless protocols such as BLUETOOTH or BLUETOOTH LOW ENERGY) with a hearing assistance device to execute Blocks of the method S100. As used herein, a “hearing assistance device” can include a hearing aid, a wearable hearing-related device (a “hearable” device), earphones/headphones in coordination with an integrated microphone, or any other device capable of augmenting incoming sound (Li ¶0011); the system can increase volumes of discrete frequency ranges (e.g., by 10 decibels) in discrete intervals (e.g., every 50 Hz) across the audible spectrum or across the vocal spectrum in a series of soundbites and upload original and modified versions of these soundbites to the hearing assistance device (Li ¶0118)]
Lee (US-20100137739-A1) [the controlling unit 100 may retrieve and sequentially output the sound sources of a test sound set from the test sound storage unit 102 through the test sound output unit 104 and the speaker 106 (Lee ¶0063); The auditory threshold is a value which is estimated as a subject's hearable minimum volume within any frequency band (Lee ¶0066); Herein, sound sources included in one test sound set may have the same frequency band, and sequential volume levels with a predetermined difference (Lee ¶0067)]
Ganter (US-20120230501-A1) [The audio output means may comprise one or more of: speakers and headphones (Ganter ¶0028); It consists of a set of discrete frequency values measured in Hertz (Hz) and a related set of threshold sensitivity values measured in decibels (dB) (Ganter ¶0083)]
Examiner’s Note Regarding Particular Treatment or Prophylaxis: Claim(s) 1 and 14 recite subject matter regarding “delivering audible sound therapy to the subject, wherein the audible sound therapy comprises an audio sound track, wherein the audio sound track is characterized by a track rhythm and by a track beat, wherein the audio sound track is familiar to the subject, and wherein the audio sound track is repeated…”, wherein claim 3 clarifies that the delivery of audible sound therapy to the subject “prevents the onset of a meltdown stage for the subject or decreases the severity of a meltdown stage for the subject” and claim 4 further limits the audio sound track to comprise “a song, a music album, an audiobook chapter, an audio book, a recited poem, a collection of recited poems, or combinations thereof”. The Examiner notes that this subject matter is considered NOT to be a particular treatment or prophylaxis, as none of the identified claims positively recites language that constitutes a particular treatment or prophylaxis as an additional element that would integrate the judicial exception into a practical application or allow the identified claims to amount to significantly more than the judicial exception [MPEP § 2106.04(d)(2)]; rather, the identified limitations are considered to refer to an abstract idea [see Step 2A Prong 1 analysis above]. However, for the sake of compact prosecution, the Examiner has also analyzed step (d) of claim 1 under Step 2A Prong 2 and Step 2B, wherein the Examiner notes that the identified limitations are still considered NOT to be a particular treatment or prophylaxis.
Regarding the particularity or generality of the treatment or prophylaxis, the Examiner notes that the step of delivering the audible sound therapy to the subject is not considered to be sufficiently particular and is merely considered to refer to mere instructions to “apply” the exception in a generic way. The audible sound therapy is merely defined as an audio track characterized by a track rhythm and by a track beat, and claim 4 further “limits” the audio track to comprise “a song, a music album, an audiobook chapter, an audio book, a recited poem, a collection of recited poems, or combinations thereof”, which is considered to define the sound track at a high level of generality and to generically refer to any song, audiobook, or recited poem [Conversely, consider a claim that recites the same abstract idea and "administering a suitable medication to a patient." This administration step is not particular, and is instead merely instructions to "apply" the exception in a generic way (MPEP § 2106.04(d)(2)(a))]. Furthermore, the recitation of claim 3 wherein the audible sound therapy “prevents the onset of a meltdown stage for the subject or decreases the severity of a meltdown stage for the subject” is considered to be an equivalent of “apply it”.
Regarding whether the identified limitations of claims 1, 3-4, and 14 are merely extra-solution activity or a field of use, the Examiner notes that the step of “delivering audible sound therapy to the subject… wherein the audio sound track is repeated until it is determined, in any sequence, at least one of the following: that the target motion data exceed the motion data of the subject; that the target sound data exceed the sound data of the subject; and that the target physiological data exceed the physiological data of the subject” [emphasis applied] is considered to define a pre-solution activity, as the delivery of the audible sound therapy is performed in order to gather data for a mental analysis step [see Step 2A Prong 1 analysis above], and is a necessary precursor for all uses of the recited exception [this administration is performed in order to gather data for the mental analysis step, and is a necessary precursor for all uses of the recited exception. It is thus extra-solution activity, and does not integrate the judicial exception into a practical application (MPEP § 2106.04(d)(2)(c))].
Accordingly, the claim(s) as a whole fail to amount to significantly more than the judicial exception under Step 2B.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-10, 14-15, and 17-18 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ganesh (US-10231664-B2, cited by Applicant).
Regarding claim 1, Ganesh teaches
A method comprising:
(a) acquiring sensor data from one or more wearable sensors configured to be worn by a subject, wherein the sensor data comprise motion data, sound data, physiological data, or combinations thereof of the subject [The signals of pre-autistic meltdown may be gathered through at least two sets of sensor systems:… Category B: Sensors 103 that detect physiological stress symptoms, comprising accelerometers that may detect restlessness, galvanic skin response sensors that may detect perspiration levels, flex resistors that may detect muscle tension, pulse oximetry sensors that detect various types of breathing patterns, including hypoventilation, when the patient is breathing room air, microphone that detects patient's audible frequency and vocal patterns (Ganesh Col 5:20-36)];
(b) comparing the sensor data of the subject with target data, wherein the motion data, the sound data, the physiological data, or combinations thereof of the subject are compared with target motion data, target sound data, target physiological data, or combinations thereof, respectively [Periodically, the readings from the environmental sensor system 102 and the physiological sensor system 103, are monitored by the microprocessor 410 and compared with the sensors' corresponding threshold values 515… The threshold values indicate the normal non-meltdown range of the sensors specific to each individual patient (Ganesh Col 6:11-15, 27-30)];
(c) determining, in any sequence, at least one of the following: that the motion data of the subject is equal to or exceed the target motion data; that the sound data of the subject is equal to or exceed the target sound data; and that the physiological data of the subject is equal to or exceed the target physiological data [Ganesh Col 6:11-15, 27-30]; and
(d) responsive to step (c), delivering audible sound therapy to the subject, wherein the audible sound therapy comprises an audio sound track, wherein the audio sound track is characterized by a track rhythm and by a track beat, wherein the audio sound track is familiar to the subject [When the readings from the sensors cross the thresholds, the response initiated by microprocessor 410 comprises of the following: Calming response to the patient is implemented through the calming response module 104 (Ganesh Col 6:30-34); The actions that are dynamically controlled by the calming response module 104 include (example embodiments are shown in FIG. 2, FIG. 3, and FIG. 4)… playing favorite music, playing a discrete gentle audible alert (Ganesh Col 6:42-48)], and wherein the audio sound track is repeated at least until it is determined, in any sequence, at least one of the following: that the target motion data exceed the motion data of the subject; that the target sound data exceed the sound data of the subject; and that the target physiological data exceed the physiological data of the subject [The therapeutic calming response characteristics of the calming response module 104, such as duration and intensity of the responses, can be controlled by the ‘configure the calming response module 517’ that is located on the caregivers' mobile device 521 (Ganesh Col 6:51-55); After the first iteration 702 of polling and recording the sensor, location and real time clock information into the storage module, the process is repeated periodically until any sensor value exceeds its corresponding threshold 703. 
If the latter occurs,… Independent of an active connection between the wearable module and the caregiver's mobile device, if the wearable device has been pre-configured to deliver therapy 706, then corresponding therapy is delivered to the patient… This sequence of steps repeat going back to the collection of the sensor, location, and real time clock information 702 (Ganesh Col 8:26-41, Fig. 10), wherein the loop depicted in Fig. 10 of Ganesh is considered to maintain the audible sound therapy so long as the sensor thresholds (target data) are exceeded, such that when the sensor thresholds are not exceeded, the therapy is not activated in the next iteration of the loop].
Regarding claim 2, Ganesh teaches
The method of claim 1, wherein the subject has a neurodevelopmental disorder comprising autism spectrum disorder (ASD), sensory processing disorder, or combinations thereof [Ganesh Abstract]; wherein, when the subject has ASD, the sensor data being equal to or exceeding target data correlates with the onset of a pre-meltdown stage for the subject [When the sensor values exceed the pre-configured threshold values, the system determines that the patient has reached the pre-meltdown phase, also called antecedent to the meltdown phase (Ganesh Col 3:9-12)].
Regarding claim 3, Ganesh teaches
The method of claim 2, wherein the step of delivering audible sound therapy to the subject prevents the onset of a meltdown stage for the subject or decreases the severity of a meltdown stage for the subject [When the system and method detect that the patient has entered the pre-meltdown phase, the system triggers a set of activities. These activities include providing multiple options to deliver a therapeutic calming response to the patient to prevent further escalation of the patient's stress levels (Ganesh Col 3:13-18)].
Regarding claim 4, Ganesh teaches
The method of claim 1, wherein the audio sound track comprises a song [Ganesh Col 6:42-48], a music album, an audio book chapter, an audio book, a recited poem, a collection of recited poems, or combinations thereof.
Regarding claim 5, Ganesh teaches
The method of claim 1, wherein the motion data comprise motion frequency and/or motion intensity; wherein the target motion data comprise target motion frequency and/or target motion intensity, respectively [Ganesh Col 5:20-36, 6:11-15, 27-30]; and wherein, when the motion data of the subject is equal to or exceed the target motion data, the audible sound therapy is delivered to the subject at least until the target motion frequency and/or target motion intensity exceed the motion frequency and/or motion intensity, respectively, of the subject [Ganesh Col 8:26-41, Fig. 10, see Examiner’s analysis above].
Regarding claim 6, Ganesh teaches
The method of claim 1, wherein the physiological data comprise heart rate, blood pressure, respiration rate, breathing pattern, oxygen saturation rate, muscle tension level, temperature, one or more electrocardiogram (ECG) features, one of more electromyogram (EMG) features, or combinations thereof [Ganesh Col 5:20-36, 6:11-15, 27-30], and wherein, when the physiological data of the subject is equal to or exceed the target physiological data, the audible sound therapy is delivered to the subject at least until the target physiological data exceed the physiological data of the subject [Ganesh Col 8:26-41, Fig. 10, see Examiner’s analysis above].
Regarding claim 7, Ganesh teaches
The method of claim 1, wherein the sound data comprise vocal sounds produced by the subject [Ganesh Col 5:20-36, 6:11-15, 27-30]; and wherein, when the sound data of the subject is equal to or exceed the target sound data, the audible sound therapy is delivered to the subject at least until the target sound data exceed the sound data of the subject [Ganesh Col 8:26-41, Fig. 10, see Examiner’s analysis above].
Regarding claim 8, Ganesh teaches
The method of claim 1 further comprising receiving, by a control system, the sensor data from the one or more wearable sensors; wherein the control system comprises at least one processor and at least one controller [The controller 101 includes a microprocessor/microcontroller 410 that may be interfaced with wearable device communication interface 411, storage module 406, and real time clock 407. The microprocessor 410 interfaces with environmental sensor system 102, physiological sensor system 103, and therapy device 104 (Ganesh Col 5:9-15, Fig. 7)]; wherein the at least one processor compares the sensor data to the target data; wherein, when at least one of the sensor data are equal to or exceed the target data, the at least one processor signals the at least one controller; and wherein the at least one controller delivers the audible sound therapy to the subject [Ganesh Col 6:11-15, 27-30, 8:26-41, Fig. 10].
Regarding claim 9, Ganesh teaches
The method of claim 8, wherein the control system provides for real-time delivery of the audible sound therapy to the subject [Ganesh Col 8:26-41, Fig. 10, wherein the polling of real time information to determine whether to deliver therapy is considered to read on real-time delivery].
Regarding claim 10, Ganesh teaches
The method of claim 8, wherein the control system selects the audio sound track that is familiar to the subject from a library of audio sound tracks that are familiar to the subject [Ganesh Col 6:42-48; a mobile computing and/or communication system 1700 within which a set of instructions when executed and/or processing logic when activated may cause the machine to perform any one or more of the methodologies described and/or claimed herein (Ganesh Col 8:59-63), wherein the use of an electronic device comprising a memory being configured to execute instructions to play favorite music or play a gentle audible alert is considered to read on selecting an audio sound track from audio sound tracks stored in a memory (library)]; and wherein said selection is based on the type of sensor data of the subject that is equal to or exceed the target data [Ganesh Col 6:30-34, wherein the selection of any music based on any threshold being exceeded is considered to read on the claimed limitation] and/or the magnitude of difference between the sensor data of the subject and the target data.
Regarding claim 14, Ganesh teaches
A system comprising:
one or more wearable sensors configured to detect sensor data of a subject; wherein the one or more wearable sensors comprise at least one sensor configured to detect motion data, at least one sensor configured to detect sound data, at least one sensor configured to detect physiological data, or combinations thereof [The environmental sensor system 102, the physiological sensor system 103, and therapy device 104 can be distributed and worn at different parts of the body (Ganesh Col 4:48-50); The signals of pre-autistic meltdown may be gathered through at least two sets of sensor systems:… Category B: Sensors 103 that detect physiological stress symptoms, comprising accelerometers that may detect restlessness, galvanic skin response sensors that may detect perspiration levels, flex resistors that may detect muscle tension, pulse oximetry sensors that detect various types of breathing patterns, including hypoventilation, when the patient is breathing room air, microphone that detects patient's audible frequency and vocal patterns (Ganesh Col 5:20-36)]; and
a control system configured to receive the sensor data of the subject from the one or more wearable sensors; wherein the control system comprises at least one processor and at least one controller [The controller 101 includes a microprocessor/microcontroller 410 that may be interfaced with wearable device communication interface 411, storage module 406, and real time clock 407. The microprocessor 410 interfaces with environmental sensor system 102, physiological sensor system 103, and therapy device 104 (Ganesh Col 5:9-15, Fig. 7)]; wherein the at least one processor compares the sensor data to target data; wherein, when at least one of the sensor data are equal to or exceed the target data, the at least one processor is configured to signal the at least one controller; and wherein the at least one controller delivers an audible sound therapy to the subject; wherein the audible sound therapy comprises an audio sound track, wherein the audio sound track is characterized by a track rhythm and by a track beat, wherein the audio sound track is familiar to the subject [Periodically, the readings from the environmental sensor system 102 and the physiological sensor system 103, are monitored by the microprocessor 410 and compared with the sensors' corresponding threshold values 515… The threshold values indicate the normal non-meltdown range of the sensors specific to each individual patient. When the readings from the sensors cross the thresholds, the response initiated by microprocessor 410 comprises of the following: Calming response to the patient is implemented through the calming response module 104… The actions that are dynamically controlled by the calming response module 104 include (example embodiments are shown in FIG. 2, FIG. 3, and FIG. 
4)… playing favorite music, playing a discrete gentle audible alert (Ganesh Col 6:11-15, 27-34, 42-48)], and wherein the audio sound track is repeated at least until it is determined that the target data exceed the sensor data of the subject [The therapeutic calming response characteristics of the calming response module 104, such as duration and intensity of the responses, can be controlled by the ‘configure the calming response module 517’ that is located on the caregivers' mobile device 521 (Ganesh Col 6:51-55); After the first iteration 702 of polling and recording the sensor, location and real time clock information into the storage module, the process is repeated periodically until any sensor value exceeds its corresponding threshold 703. If the latter occurs,… Independent of an active connection between the wearable module and the caregiver's mobile device, if the wearable device has been pre-configured to deliver therapy 706, then corresponding therapy is delivered to the patient… This sequence of steps repeat going back to the collection of the sensor, location, and real time clock information 702 (Ganesh Col 8:26-41, Fig. 10), wherein the loop depicted in Fig. 10 of Ganesh is considered to maintain the audible sound therapy so long as the sensor thresholds (target data) are exceeded, such that when the sensor thresholds are not exceeded, the therapy is not activated in the next iteration of the loop].
Regarding claim 15, Ganesh teaches
The system of claim 14, wherein the at least one processor compares the motion data, the sound data, the physiological data, or combinations thereof of the subject with target motion data, target sound data, target physiological data, or combinations thereof, respectively [Ganesh Col 5:20-36, 6:11-15, 27-30].
Regarding claim 17, Ganesh teaches
The system of claim 14 further comprising an audio speaker, earbuds, and/or headphones, wherein the audible sound therapy is delivered via the audio speaker, earbuds, and/or headphones without requiring assistance from a caregiver [Therapy devices (not shown) may include… miniature audible speakers, earphone connectors, wireless BLUETOOTH™ transmitters for earphones,… and the like (Ganesh Col 5:1-6); Ganesh Col 8:26-41, Fig. 10, wherein the therapy is noted as being delivered independent of any caregiver interaction].
Regarding claim 18, Ganesh teaches
The system of claim 14, wherein the subject has autism spectrum disorder (ASD) [Ganesh Abstract]; wherein the sensor data being equal to or exceeding target data correlates with the onset of a pre-meltdown stage for the subject [When the sensor values exceed the pre-configured threshold values, the system determines that the patient has reached the pre-meltdown phase, also called antecedent to the meltdown phase (Ganesh Col 3:9-12)]; and wherein the audible sound therapy prevents the onset of a meltdown stage for the subject or decreases the severity of a meltdown stage for the subject [When the system and method detect that the patient has entered the pre-meltdown phase, the system triggers a set of activities. These activities include providing multiple options to deliver a therapeutic calming response to the patient to prevent further escalation of the patient's stress levels (Ganesh Col 3:13-18)].
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 11-13 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ganesh, as applied to claims 10 and 14 above, in view of Inz (US-20220165393-A1).
Regarding claim 11, Ganesh teaches
The method of claim 10, wherein acquiring sensor data from one or more wearable sensors further comprises receiving, by at least one computing device, the sensor data from the one or more wearable sensors [The environmental sensor system 102, the physiological sensor system 103, and therapy device 104 can be distributed and worn at different parts of the body (Ganesh Col 4:48-50)]; wherein the at least one computing device comprises the at least one processor [Ganesh Col 5:9-15, Fig. 7]; wherein comparing the sensor data of the subject with target data further comprises providing, by the at least one computing device, an input to a model; wherein the model is configured to evaluate the sensor data of the subject with respect to the target data to determine the presence or the absence of a pre-meltdown stage for the subject; wherein the at least one processor, based upon a determination of the presence of a pre-meltdown stage for the subject, signals the at least one controller; and wherein the at least one controller delivers the audible sound therapy to the subject [Ganesh Col 6:30-34, wherein the input of data into an assessment to determine whether the data exceeds a threshold is considered to read on a model].
However, Ganesh fails to explicitly disclose wherein the model is trained with sensor data from the subject and/or with sensor data from at least one additional subject.
Inz discloses systems and methods for detecting the onset of a behavioral disorder using a combination of motion, sound, and physiological sensor data of a subject for the purposes of mitigating or eliminating the severity of the onset behavioral disorder [The following physiological indicators and physiological measures can be used in various combinations to detect the onset of a panic attack or other behavioral disorders: heart rate; heart rate variability (HRV); electrocardiogram (ECG); core body temperature; heat flow off the body; respiratory rate; galvanic skin response (GSR); electromyography (EMG); electroencephalography—Fast Fourier transform analysis (EEG-FFT); electrooculogram (EOG); blood pressure; hydration level; muscle pressure; activity level; skin temperature; body position and posture; acceleration; and voice tone (Inz ¶0009); By continuously receiving feedback on the user's physiological status in the form of detection signals, the user is able to detect the onset of the disorder, to observe a connection between one's mental state and the feedback signals, and, with practice and guidance, learn how to control, mitigate, or eliminate the disorder (Inz ¶0010)], wherein Inz discloses the use of at least one machine learning model that is trained with sensor data from the subject and/or with data from at least one additional subject, wherein the at least one machine learning model is configured to evaluate the sensor data of the subject with respect to target data and provide an evaluation result comprising an indication that the sensor data of the subject is equal to or exceed the target data; wherein the indication that the sensor data of the subject is equal to or exceed the target data is an evaluation score being equal to or greater than a threshold score value [In the supervised learning mode, explicit labels are provided by the user 110, by a therapist 180, or by another party that indicate the mental, emotional, or behavioral status of the 
user 110. These labels are paired with the features and a mapping from features to labels is formed via associative or supervised learning (Inz ¶0080); The detection and classification signals may vary according to the disorder but a preferred embodiment for most disorders comprises a multi-level detection signal that indicates that a disorder event has been detected if the level is above a threshold. The level of the detection signal corresponds to the severity of the disorder event. The number of levels of the detection signal corresponds to the number of severity levels (e.g., in the previous example above there are four levels: normal, mild, moderate, and intense). In an alternative embodiment, the detection signal is a binary signal that indicates the presence or absence of a disorder event (Inz ¶0092); The signal processing subsystem 210, or more specifically the machine learning subsystem 360 of FIGS. 6 and 9, generates a disorder event detection signal. The disorder event detection signal is based on the classification signals generated by the supervised learning subsystem 630 of the machine learning subsystem 360 of FIG. 9. In an example embodiment, if any of the non-normal disorder classes are active then the disorder event detection signal indicates that a disorder event has been detected (Inz ¶0093)].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Ganesh such that the model is trained with sensor data from the subject and/or with sensor data from at least one additional subject, as a trained machine learning model may allow for personalized detection for the subject to enhance accuracy [Thus, the system also includes a subsystem to adjust and improve the algorithm given feedback from a specific user through machine learning. The algorithm becomes personalized to a user through user feedback where false positives and negatives for symptoms or psychological events are indicated. Through machine learning, the system adjusts to errors indicated by the user and becomes more customized to an individual (Inz ¶0017)], and is further considered to amount to mere application of a known technique [machine learning for classification/event detection] to a known device (method, or product) ready for improvement to yield predictable results [MPEP § 2143(I)(D)].
Regarding claim 12, Ganesh in view of Inz teaches
The method of claim 11, wherein evaluating the sensor data of the subject with respect to the target data further comprises determining, by the at least one computing device, that the sensor data of the subject is equal to or exceed the target data [Ganesh Col 6:30-34]; wherein determining, by the at least one computing device, that the sensor data of the subject is equal to or exceed the target data further comprises comparing at least one evaluation score with at least one threshold score value and determining that the evaluation score is equal to or exceeds the threshold score value [see § 103 modification above; Inz ¶¶0092-0093].
Regarding claim 13, Ganesh in view of Inz teaches
The method of claim 12, wherein step (d) of delivering audible sound therapy to the subject further comprises delivering audible sound therapy to the subject by the at least one controller at least until the threshold score value exceeds the evaluation score [wherein, based on the § 103 modification of claims 11-12 above, the determination of the target data being exceeded is based on an evaluation score exceeding a threshold score value].
Regarding claim 16, Ganesh in view of Inz teaches
The system of claim 14, wherein the control system further comprises (i) at least one model, wherein the at least one model is configured to evaluate the sensor data of the subject with respect to target data [Ganesh Col 6:30-34, wherein the input of data into an assessment to determine whether the data exceeds a threshold is considered to read on a model]; and (ii) a non-transitory computer readable medium that stores instructions [Ganesh Col 8:59-63] that, when executed by the processor, cause the processor to: receive, using the control system, an input comprising sensor data of the subject; provide, by the control system, the input to the at least one model; determine, by the control system, an evaluation result comprising an indication that the sensor data of the subject is equal to or exceed the target data by using the at least one model; and deliver, by the control system, audible sound therapy to the subject, wherein the audible sound therapy comprises an audio sound track, wherein the audio sound track is familiar to the subject, and wherein the audio sound track is configured to be repeated at least until the target data exceed the sensor data of the subject [Ganesh Col 6:11-15, 27-34, 42-48].
However, Ganesh fails to explicitly disclose wherein the model is at least one machine learning model that is trained with sensor data from the subject and/or with data from at least one additional subject; wherein the indication that the sensor data of the subject is equal to or exceed the target data is an evaluation score being equal to or greater than a threshold score value.
Inz discloses systems and methods for detecting the onset of a behavioral disorder using a combination of motion, sound, and physiological sensor data of a subject for the purposes of mitigating or eliminating the severity of the onset behavioral disorder [Inz ¶¶0009-0010)], wherein Inz discloses the use of at least one machine learning model that is trained with sensor data from the subject and/or with data from at least one additional subject, wherein the at least one machine learning model is configured to evaluate the sensor data of the subject with respect to target data and provide an evaluation result comprising an indication that the sensor data of the subject is equal to or exceed the target data; wherein the indication that the sensor data of the subject is equal to or exceed the target data is an evaluation score being equal to or greater than a threshold score value [Inz ¶¶0080, 0092-0093].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Ganesh such that the model is at least one machine learning model that is trained with sensor data from the subject and/or with data from at least one additional subject, and such that the indication that the sensor data of the subject is equal to or exceed the target data is an evaluation score being equal to or greater than a threshold score value, as a trained machine learning model may allow for personalized detection for the subject to enhance accuracy [Thus, the system also includes a subsystem to adjust and improve the algorithm given feedback from a specific user through machine learning. The algorithm becomes personalized to a user through user feedback where false positives and negatives for symptoms or psychological events are indicated. Through machine learning, the system adjusts to errors indicated by the user and becomes more customized to an individual (Inz ¶0017)], and is further considered to amount to mere application of a known technique [machine learning for classification/event detection] to a known device (method, or product) ready for improvement to yield predictable results [MPEP § 2143(I)(D)].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEVERO ANTONIO P LOPEZ whose telephone number is (571)272-7378. The examiner can normally be reached M-F 9-6 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Marmor II can be reached at (571) 272-4730. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SEVERO ANTONIO P LOPEZ/Examiner, Art Unit 3791