DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Objections
Claims 1-20 are objected to because of the following informalities:
Claim 1 is missing a semicolon at the end of the “measuring” limitation.
Dependent claims 2-20 inherit the deficiencies of their respective parent claims, and are thus objected to under the same rationale.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 3-12 and 14-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 3, it is unclear how the score assigned to the at least one audio sample measurement can be input prior to the measuring. In order to be scored, the audio sample must first be measured. Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought.
Regarding claims 4, 6, and 7, it is unclear how the device is trained to assign a predetermined threshold to at least one audio test input when “the predetermined threshold is determined by a manufacturer” as claimed in claims 4 and 7 or when “the predetermined threshold is determined by the at least one user prior to the measuring step” as claimed in claim 6. In other words, if the predetermined threshold is determined by a manufacturer (claims 4 and 7) or by at least one user (claim 6), how is the device trained to assign a predetermined threshold? Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought.
Regarding claims 5, 8, and 14, each of these claims recites “training at least one ML or AI algorithm”. Reciting an abbreviation without explicitly identifying what the abbreviation refers to fails to apprise one of ordinary skill in the art of the metes and bounds of the patent protection sought. For the purposes of compact prosecution, “ML” is construed as “machine learning” and “AI” is construed as “artificial intelligence”. Dependent claim 15 inherits the deficiencies of its parent claims, and is thus rejected under the same rationale.
Regarding claim 9, it is unclear what distinguishes alerting the at least one user from emitting at least one frequency, emitting at least one sound, emitting at least one fragrance, emitting at least one light, sending at least one notification, or any combination thereof. Similarly, it is unclear what distinguishes sending at least one notification from alerting the at least one user, emitting at least one frequency, emitting at least one sound, emitting at least one fragrance, emitting at least one light, or any combination thereof. One of ordinary skill in the art would reasonably understand that sending at least one notification is another way of reciting alerting the at least one user. Additionally, one of ordinary skill in the art would also reasonably understand emitting at least one frequency, emitting at least one sound, emitting at least one fragrance, emitting at least one light, or any combination thereof to also be alerting the at least one user/sending at least one notification. Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought. Dependent claims 10-15 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Regarding claim 10, it is unclear how emitting at least one frequency comprises emitting at least one light when emitting at least one light is claimed as an alternative to emitting at least one frequency in parent claim 9. In other words, claim 10 includes “where the at least one ameliorative action comprises: emitting at least one light or emitting at least one light”. Thus, it is unclear how “emitting at least one light” is an alternative to itself. Therefore, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought.
Regarding claim 11, it is unclear what distinguishes an alarm from music, a pitch, a tone, a voice, or any combination thereof. One of ordinary skill in the art would reasonably understand music, a pitch, a tone, a voice, or any combination thereof to also be an alarm. Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought.
Regarding claim 12, it is unclear what distinguishes an alert to a police department from an alert to an emergency department. One of ordinary skill in the art would reasonably understand that a police department is an emergency department. The disclosure does not aid understanding, as it merely recites language similar to that of the claim without any further detail. Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought.
Regarding claim 16, it is unclear how training the device to assign a predetermined threshold to at least one audio test input, where the at least one audio test input comprises at least one audio test measurement of an ambient environment, comprises “instructing the at least one user to assign a predetermined threshold to the at least one audio measurement.” In other words, how is the device trained to perform the function of assigning when the at least one user performs the function of assigning? Thus, one of ordinary skill in the art would not be apprised of the metes and bounds of the patent protection sought. Dependent claims 17 and 18 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claims 1 and 4-8, the disclosure fails to provide sufficient written description for “training the device to assign a predetermined threshold to at least one audio test input, where the at least one audio test input comprises at least one audio test measurement of an ambient environment, where the at least one audio test measurement is selected from the group consisting of: a decibel level, at least one speech characteristic or any combination thereof, and where the predetermined threshold is selected from the group consisting of a maximum decibel level, at least one prohibited speech characteristic or any combination thereof” in claim 1, “where the predetermined threshold is determined by a manufacturer” in claims 4 and 7, “where the predetermined threshold is determined by training at least one ML or AI algorithm” in claims 5 and 8, and “where the predetermined threshold is determined by the at least one user prior to the measuring step” in claim 6 to show one of ordinary skill in the art that Applicant had possession of the claimed invention. Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). 
In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). The written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. See, for example, at least Fig. 6 and para. 36, 45, 46, 56, 61-67, 70-75. In particular, the flow chart of Fig. 6 and the text of para. 36, 46, 56, and 61 merely recite language similar to that of the claims without any meaningful description of the steps, calculations, or algorithms necessary to perform the claimed functionality, while para. 62-67 and 70-75 generically describe what machine learning and artificial intelligence are without any description of how machine learning and/or artificial intelligence is used in the claimed invention, how such algorithms are trained in the claimed invention, or what parameters bound such algorithms in the claimed invention. Therefore, such a limitation lacks an adequate written description because an indefinite, unbounded limitation would cover all ways of performing a function and indicate that the inventor has not provided sufficient disclosure to show possession of the invention. See MPEP 2163.03(VI). Dependent claims 2-20 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Regarding claim 2, the disclosure fails to provide sufficient written description for “where the at least one speech characteristic comprises: a quantity of swear words uttered by at least one user, a quantity of slurred words uttered by the at least one user, a quantity of words indicating suicidal ideation uttered by the at least one user, a pitch of the speech, a tone of the speech, a timbre of the speech, a change in volume of a voice of the at least one user, or any combination thereof” to show one of ordinary skill in the art that Applicant had possession of the claimed invention. Claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed. It is not enough that one skilled in the art could write a program to achieve the claimed function because the specification must explain how the inventor intends to achieve the claimed function to satisfy the written description requirement. See MPEP 2161.01(I). The written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. See, for example, at least para. 38 and 87, which merely recite language similar to that of the claim without any meaningful description of the steps, calculations, or algorithms necessary to perform the claimed functionality.
For instance, the disclosure is silent regarding any description for any of the claimed speech characteristics, let alone any description of any analysis of a speech characteristic. Therefore, such a limitation lacks an adequate written description because an indefinite, unbounded limitation would cover all ways of performing a function and indicate that the inventor has not provided sufficient disclosure to show possession of the invention. See MPEP 2163.03(VI). Dependent claims 2-20 inherit the deficiencies of their respective parent claims, and are thus rejected under the same rationale.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without including additional elements that are sufficient to amount to significantly more than the judicial exception itself.
Step 1
The instant claims are directed to a method which falls under at least one of the four statutory categories (STEP 1: YES).
Step 2A, Prong 1
Independent claim 1 recites:
A method comprising:
obtaining a device, where the device is configured to be operated by at least one user;
training the device to assign a predetermined threshold to at least one audio test input, where the at least one audio test input comprises at least one audio test measurement of an ambient environment,
where the at least one audio test measurement is selected from the group consisting of: a decibel level, at least one speech characteristic or any combination thereof, and
where the predetermined threshold is selected from the group consisting of a maximum decibel level, at least one prohibited speech characteristic or any combination thereof;
measuring at least one audio sample input, to obtain at least one audio sample measurement, where the at least one audio sample measurement is selected from the group consisting of: a current decibel level, at least one current speech characteristic or any combination thereof
assigning a score to the at least one audio sample measurement;
evaluating whether the score exceeds the predetermined threshold, and:
when the score exceeds a predetermined threshold, performing, with the device, at least one ameliorative action; and
when the score does not exceed the predetermined threshold, repeating the measuring, assigning, and evaluating steps until the score exceeds the predetermined threshold.
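For illustration only (this sketch is not part of the claim language or the record), the recited measuring/assigning/evaluating loop of claim 1 can be expressed as follows; the function names, the identity scoring scheme, and the example threshold are all hypothetical:

```python
# Hypothetical sketch of the claimed loop: measure an audio sample,
# assign a score to the measurement, evaluate the score against a
# predetermined threshold, and repeat until the threshold is exceeded,
# at which point an ameliorative action is performed.

def monitor(measure, score, threshold, ameliorate):
    """Repeat the measuring, assigning, and evaluating steps until the
    score exceeds the predetermined threshold."""
    while True:
        sample = measure()        # measuring: e.g., a current decibel level
        s = score(sample)         # assigning: a score for the measurement
        if s > threshold:         # evaluating: compare against the threshold
            ameliorate()          # ameliorative action: e.g., an alert
            return s


# Example: decibel readings climb until one exceeds a threshold of 90.
readings = iter([40.0, 55.0, 95.0])
actions = []
result = monitor(lambda: next(readings),  # measuring
                 lambda x: x,             # assigning (identity score)
                 90.0,                    # predetermined threshold
                 lambda: actions.append("alert"))
```

As the sketch suggests, the claim reads on any iterative compare-against-threshold loop, which bears on the breadth concerns discussed in the §§ 101 and 112 analyses above.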
All of the foregoing underlined elements identified above amount to the abstract idea grouping of a certain method of organizing human activity because they amount to managing personal behavior or interactions between people (including social activities, teaching, and following rules or instructions) by merely collecting information, analyzing the collected information, and outputting the results of the collection and analysis in an iterative manner. These elements are also interpreted as a series of steps that could reasonably be performed by mental processes with the aid of pen and paper because the claims, under their broadest reasonable interpretation, cover performance of the limitations in the mind (including observation, evaluation, judgment, opinion) but for the recitation of generic computer components. See MPEP 2106.04(a)(2)(III)(C) - A Claim That Requires a Computer May Still Recite a Mental Process. Even if humans would use a physical aid to help them complete the recited steps, the use of such physical aid does not negate the mental nature of these limitations.
The dependent claims, except for claim 20, amount to merely further defining the judicial exception.
Therefore, the claims recite a judicial exception. (STEP 2A, PRONG 1: YES).
Step 2A, Prong 2
This judicial exception is not integrated into a practical application because the independent and dependent claims do not include additional elements that are sufficient to integrate the exception into a practical application under the considerations set forth in MPEP 2106.04(d). The elements of the claims above that are not underlined constitute additional elements.
The following additional elements, both individually and as a whole, merely generally link the judicial exception to a particular technological environment or field of use: a device (claim 1), a memory (claim 17), at least one sensor (claim 20), and at least one processor (claim 20). This is evidenced by the manner in which these elements are disclosed in the drawings and the instant specification. For example, Fig. 4 merely illustrates these elements as nondescript black boxes, while para. 26-35, 59-69, 72-79, and 81-137 merely provide stock descriptions of generic computer hardware and software components in any generic arrangement and illustrate that the claimed invention is merely using a software application to cause a computer to implement the judicial exception. Thus, the computer components are merely an attempt to link the abstract idea to a particular technological environment, but do not result in an improvement to the technology or computer functions employed. In particular, the use of a sensor as claimed merely adds insignificant extra-solution data-gathering activity to the judicial exception. Similarly, in the event that “at least one ML or AI algorithm” is considered an additional element, the mere use of at least one machine learning or artificial intelligence algorithm, and artificial intelligence as a whole, does not improve computer functionality, as it merely invokes the use of a computer or other machinery in its ordinary capacity to process information. The claims are silent regarding any specific rules with specific characteristics that improve the functionality of the computer system. None of the hardware offers a meaningful limitation beyond generally linking the performance of the steps to a particular technological environment, that is, implementation via computers. Again, this is evidenced by the manner in which these elements are disclosed in the drawings and specification as identified above.
It should be noted that because the courts have made it clear that mere physicality or tangibility of an additional element or elements is not a relevant consideration in the eligibility analysis, the physical nature of the additional elements does not affect this analysis. See MPEP 2106.05(I) for more information on this point, including explanations from judicial decisions such as Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 224-26 (2014). Additionally, the claims do not apply or use a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, nor do they apply or use a judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. For instance, the disclosure identifies that the claimed invention is generically drawn towards detecting audible distress signals and performing some action in response to the detected signal. See, for example, at least para. 8 and 19 of the specification. It is particularly noted that para. 19 identifies that the action may only consist of a prompt or alert, which one of ordinary skill in the art would recognize as not necessarily ameliorative. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. (STEP 2A, PRONG 2: NO).
Step 2B
The independent and dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception under the considerations set forth in MPEP 2106.05. As identified in Step 2A, Prong 2, above, the claimed system and the process it performs do not require the use of a particular machine, nor do they result in the transformation of an article. Although the claims recite elements, identified above, for performing at least some of the recited functions, these elements are recited at a high level of generality in a conventional arrangement for performing their basic computer functions (i.e., receiving, processing, outputting data). This is evidenced by the manner in which these elements are disclosed in the instant specification. For example, Fig. 4 merely illustrates these elements as nondescript black boxes or stock images, while para. 26-35, 59-69, 72-79, and 81-137 merely provide stock descriptions of generic computer hardware and software components in any generic arrangement and illustrate that the claimed invention is focused on a software application that merely causes a computer to implement the judicial exception. Thus, the computer components are merely an attempt to link the abstract idea to a particular technological environment, but do not result in an improvement to the technology or computer functions employed. The claims do not recite any specific rules with specific characteristics that improve the functionality of the computer system. Thus, the focus of the claimed invention is on the analysis of the collected data, which is itself at best merely an improvement within the abstract idea. See pg. 2-3 in SAP America, Inc. v. InvestPic, LLC, 890 F.3d 1016, 126 USPQ2d 1638 (Fed. Cir. 2018), which proffered “[w]e may assume that the techniques claimed are groundbreaking, innovative, or even brilliant, but that is not enough for eligibility.
Nor is it enough for subject-matter eligibility that claimed techniques be novel and nonobvious in light of prior art, passing muster under 35 U.S.C. §§ 102 and 103. The claims here are ineligible because their innovation is an innovation in ineligible subject matter. Their subject is nothing but a series of mathematical calculations based on selected information and the presentation of the results of those calculations.” Furthermore, the steps are merely recited to be performed by, or using, the elements, while the specification makes clear that the computerized system itself is ancillary to the claimed invention, as identified above. This further identifies that none of the hardware offers a meaningful limitation beyond, at best, generally linking the performance of the steps to a particular technological environment, that is, implementation via computers. Viewed as a whole, these additional claim elements do not provide a meaningful limitation to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. (STEP 2B: NO).
Therefore, the claims are rejected under 35 USC 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 4, 5, 7-15, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gutta et al. (US 2002/0169583, hereinafter referred to as Gutta).
Regarding claim 1, Gutta teaches a method comprising:
obtaining a device, where the device is configured to be operated by at least one user (Gutta, Fig. 1 illustrates this.);
training the device to assign a predetermined threshold to at least one audio test input, where the at least one audio test input comprises at least one audio test measurement of an ambient environment (Gutta, para. 51, “Inputs of various modalities 500 such as… audio data, environmental conditions such as… sound level…. etc. are applied to a trained classifier 510 to discriminate and classify distinguishable features of a monitored environment.” Para. 98, “For example, an audio signal may be filtered by a bandpass filter set for detection of loud crashing sounds and a detector that sets a time-latch output when the filter output is above certain level.” Para. 99, “Preferably, however, alarms should be informative as possible within the specified design criteria. For example, an alarm signal may contain audio and/or video data preceding and following the event(s) that triggered the alarm status.”),
where the at least one audio test measurement is selected from the group consisting of: a decibel level, at least one speech characteristic or any combination thereof (Gutta, para. 51, “Inputs of various modalities 500 such as… audio data, environmental conditions such as… sound level…. etc. are applied to a trained classifier 510 to discriminate and classify distinguishable features of a monitored environment.”), and
where the predetermined threshold is selected from the group consisting of a maximum decibel level, at least one prohibited speech characteristic or any combination thereof (Gutta, para. 54-62, “To illustrate, the signal generated by the audio classifier may be a vector that includes the following components. 1. Identity of speaker, 2. Number of speakers, 3. Type of sound (crashing, bumping, periodic, tapping, etc.) 4. Sound intensity level, 5. Duration, time of day, of distinguished sound, 6. Quality of speech (whispering, yelling, rapid, etc.) 7. Quality of voice (masculine, feminine, child, etc.), 8. Identified event (switching of a light, snoring, tinny sound of a radio or TV, vacuum cleaner, etc.)”; para. 74, “The result of this parsing is the extraction of words or utterance features that the mental state/health status classifier 290 may recognize… Words indicative of mood may then be sent to the mental state/health status classifier 290 for classification of the mood of the speaker.”);
measuring at least one audio sample input, to obtain at least one audio sample measurement, where the at least one audio sample measurement is selected from the group consisting of: a current decibel level, at least one current speech characteristic or any combination thereof (Gutta, para. 26, “8. loud sounds, normal sounds, and unusual sounds, based upon signature of sound”; para. 51, “Inputs of various modalities 500 such as video data, audio data, environmental conditions such as temperature, sound level, security system status, etc. are applied to a trained classifier 510 to discriminate and classify distinguishable features of a monitored environment.” Para. 74, “The result of this parsing is the extraction of words or utterance features that the mental state/health status classifier 290 may recognize. Parsing may be done using rule-based template matching as in conversation simulators or using more sophisticated natural language methods. Words indicative of mood may then be sent to the mental state/health status classifier 290 for classification of the mood of the speaker.” Para. 77, “a low incidence of words suggesting enthusiasm such as superlatives (input parser 410 signal indicating adjectives)”; para. 78, “a quiet flat tone in the voice (audio classifier 210 signal indicating modulation inflection intensity)”; para. 84, “the pitch of the occupant’s voice”),
assigning a score to the at least one audio sample measurement (Gutta, Fig. 4, Buffer signals 1..N S10; para. 102, “One way to handle this is to assign a signature to each alarm based on a vector of the components that gave rise to the alarm condition… The components may be quantized to insure against small differences in vector components being identified as different or a low sensitivity comparison may be used to achieve the same effect.”);
evaluating whether the score exceeds the predetermined threshold (Gutta, Fig. 4, Alarm J condition), and:
when the score exceeds a predetermined threshold, performing, with the device, at least one ameliorative action (Gutta, Fig. 4, Alarm J overridden? S15, No, Generate alarm message S20 – Transmit alarm message S40); and
when the score does not exceed the predetermined threshold, repeating the measuring, assigning, and evaluating steps until the score exceeds the predetermined threshold (Gutta, Fig. 4, Buffer signals 1..N S10, No alarm).
Regarding claim 2, Gutta teaches the method of claim 1, where the at least one speech characteristic comprises: a quantity of swear words uttered by at least one user, a quantity of slurred words uttered by the at least one user, a quantity of words indicating suicidal ideation uttered by the at least one user, a pitch of the speech, a tone of the speech, a timbre of the speech, a change in volume of a voice of the at least one user, or any combination thereof (Gutta, para. 26, “8. loud sounds, normal sounds, and unusual sounds, based upon signature of sound”; para. 51, “Inputs of various modalities 500 such as video data, audio data, environmental conditions such as temperature, sound level, security system status, etc. are applied to a trained classifier 510 to discriminate and classify distinguishable features of a monitored environment.” Para. 74, “The result of this parsing is the extraction of words or utterance features that the mental state/health status classifier 290 may recognize. Parsing may be done using rule-based template matching as in conversation simulators or using more sophisticated natural language methods. Words indicative of mood may then be sent to the mental state/health status classifier 290 for classification of the mood of the speaker.” Para. 77, “a low incidence of words suggesting enthusiasm such as superlatives (input parser 410 signal indicating adjectives)”; para. 78, “a quiet flat tone in the voice (audio classifier 210 signal indicating modulation inflection intensity)”; para. 84, “the pitch of the occupant’s voice”).
Regarding claim 4, Gutta teaches the method of claim 1, where the predetermined threshold is determined by a manufacturer (Gutta, para. 91, “The mental state/health status classifier 290 outputs a state vector, with a number of degrees of freedom, that corresponds to the models of personality and mental state chosen by the designer.” The designer is construed as a manufacturer.).
Regarding claim 5, Gutta teaches the method of claim 1, where the predetermined threshold is determined by training at least one ML or AI algorithm (Gutta, para. 51, “Inputs of various modalities 500 such as video data, audio data, environmental conditions such as temperature, sound level, security system status, etc. are applied to a trained classifier 510 to discriminate and classify distinguishable features of a monitored environment.”).
Regarding claim 7, Gutta teaches the method of claim 1, where the predetermined threshold is determined by a manufacturer (Gutta, para. 91, “The mental state/health status classifier 290 outputs a state vector, with a number of degrees of freedom, that corresponds to the models of personality and mental state chosen by the designer.” The designer is construed as a manufacturer.).
Regarding claim 8, Gutta teaches the method of claim 1, where the predetermined threshold is determined by training at least one ML or AI algorithm (Gutta, para. 51, “Inputs of various modalities 500 such as video data, audio data, environmental conditions such as temperature, sound level, security system status, etc. are applied to a trained classifier 510 to discriminate and classify distinguishable features of a monitored environment.”).
Regarding claim 9, Gutta teaches the method of claim 1, where the at least one ameliorative action comprises: alerting the at least one user, emitting at least one frequency, emitting at least one sound, emitting at least one fragrance, emitting at least one light, sending at least one notification, or any combination thereof (Gutta, at least paras. 99-107).
Regarding claim 10, Gutta teaches the method of claim 9, where the emitting of the at least one frequency comprises: emitting at least one light, initiating at least one vibration, displaying at least one image, or any combination thereof (Gutta, at least paras. 99-107).
Regarding claim 11, Gutta teaches the method of claim 9, where the at least one sound comprises: music, an alarm, a pitch, a tone, a mantra, a voice, or any combination thereof (Gutta, at least paras. 99-107).
Regarding claim 12, Gutta teaches the method of claim 9, where the at least one notification comprises: a text, a phone call, an email, a voicemail, an alert to a police department, an alert to an emergency department, an alert to at least one medical provider, or any combination thereof (Gutta, at least paras. 99-107).
Regarding claim 13, Gutta teaches the method of claim 9, further comprising prompting the at least one user to dismiss the alert (Gutta, para. 102, “is desirable for a given alarm to be acknowledged so that a new alarm condition, arising from different circumstances, is not confused as the existing alarm currently being attended to. One way to handle this is to assign a signature to each alarm based on a vector of the components that gave rise to the alarm condition. The recognition of the same alarm condition would give rise to another vector which may be compared to a table of existing alarms (at step S15) to see if the new alarm had already been overriden.”).
Regarding claim 14, Gutta teaches the method of claim 13, where upon the at least one user dismissing the alert, the method further comprising training at least one ML or AI algorithm to adjust at least one of: the score, the predetermined threshold, or any combination thereof (Gutta, para. 102, “is desirable for a given alarm to be acknowledged so that a new alarm condition, arising from different circumstances, is not confused as the existing alarm currently being attended to. One way to handle this is to assign a signature to each alarm based on a vector of the components that gave rise to the alarm condition. The recognition of the same alarm condition would give rise to another vector which may be compared to a table of existing alarms (at step S15) to see if the new alarm had already been overriden.”).
Regarding claim 15, Gutta teaches the method of claim 14, where upon an adjustment of at least one of: the score, the predetermined threshold, or any combination thereof, the at least one user will not be alerted when an identical score is assigned to an identical measurement of an identical type (Gutta, para. 102, “is desirable for a given alarm to be acknowledged so that a new alarm condition, arising from different circumstances, is not confused as the existing alarm currently being attended to. One way to handle this is to assign a signature to each alarm based on a vector of the components that gave rise to the alarm condition. The recognition of the same alarm condition would give rise to another vector which may be compared to a table of existing alarms (at step S15) to see if the new alarm had already been overriden. The components may be quantized to insure against small differences in vector components being identified as different or a low sensitivity comparison may be used to achieve the same effect.”).
Regarding claim 20, Gutta teaches the method of claim 1, where the device comprises at least one sensor and at least one processor (Gutta, Fig. 1, Sensors 141, Controller 100).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3, 6, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Gutta et al. (US 2002/0169583, hereinafter referred to as Gutta) as applied to claim 1, in view of Jain et al. (US 2012/0289788, hereinafter referred to as Jain).
Regarding claim 3, Gutta teaches the method of claim 1.
Gutta does not explicitly teach where the score is input by the at least one user prior to the measuring.
However, in an analogous art, Jain teaches where the score is input by the at least one user prior to the measuring (Jain, para. 61, “The user may touch one or more of the mood icons to input his current mood (i.e., psychological state). Mood intensity widget 440 is a row with numbered icons ranging from one to four that each correspond to a level of intensity of a psychological state. The numbers range from the lowest to highest intensity, with one being the lowest and four being the highest. The user may touch one of the numbers to input an intensity corresponding to a selected mood. In particular embodiments, the mood intensity corresponds to a standard psychometric scale (e.g., Likert scale). Activity input widget 450 is a drop-down menu containing a list of activities (i.e., behavioral states). The list is not illustrated, but could include a variety of behavioral states, such as sleeping, eating, working, driving, arguing, etc. The user may touch the drop-down menu to input one or more behavioral states. In particular embodiments, the selected behavioral state may correspond to a selected psychological state”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Gutta to include the score being input by the at least one user via the Likert scale, as taught by Jain, because it provides a “corresponding set of control parameters that specify… a self-reported psychological state” wherein the “set of control parameters consists of data parameters that specify when a sensor 112 or data stream is at a normal or expected state.” See Jain at para. 61. In other words, it allows a user to validate the system’s identification of a psychological state.
Regarding claim 6, Gutta teaches the method of claim 1.
Gutta does not explicitly teach where the predetermined threshold is determined by the at least one user prior to the measuring step.
However, in an analogous art, Jain teaches where the predetermined threshold is determined by the at least one user prior to the measuring step (Jain, para. 61, “The user may touch one or more of the mood icons to input his current mood (i.e., psychological state). Mood intensity widget 440 is a row with numbered icons ranging from one to four that each correspond to a level of intensity of a psychological state. The numbers range from the lowest to highest intensity, with one being the lowest and four being the highest. The user may touch one of the numbers to input an intensity corresponding to a selected mood. In particular embodiments, the mood intensity corresponds to a standard psychometric scale (e.g., Likert scale). Activity input widget 450 is a drop-down menu containing a list of activities (i.e., behavioral states). The list is not illustrated, but could include a variety of behavioral states, such as sleeping, eating, working, driving, arguing, etc. The user may touch the drop-down menu to input one or more behavioral states. In particular embodiments, the selected behavioral state may correspond to a selected psychological state”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Gutta such that the predetermined threshold is determined by the at least one user via the Likert scale, as taught by Jain, because it provides a “corresponding set of control parameters that specify… a self-reported psychological state” wherein the “set of control parameters consists of data parameters that specify when a sensor 112 or data stream is at a normal or expected state.” See Jain at para. 61. In other words, it allows a user to validate the system’s identification of a psychological state.
Regarding claim 16, Gutta teaches the method of claim 1.
Gutta does not explicitly teach where training the device comprises: presenting the at least one input to the at least one user; instructing the at least one user to measure the at least one audio input to obtain the at least one audio measurement; and instructing the at least one user to assign a predetermined threshold to the at least one audio measurement.
However, in an analogous art, Jain teaches where training the device comprises: presenting the at least one input to the at least one user (Jain, para. 61, “The user may touch one or more of the mood icons to input his current mood (i.e., psychological state). Mood intensity widget 440 is a row with numbered icons ranging from one to four that each correspond to a level of intensity of a psychological state. The numbers range from the lowest to highest intensity, with one being the lowest and four being the highest. The user may touch one of the numbers to input an intensity corresponding to a selected mood. In particular embodiments, the mood intensity corresponds to a standard psychometric scale (e.g., Likert scale). Activity input widget 450 is a drop-down menu containing a list of activities (i.e., behavioral states). The list is not illustrated, but could include a variety of behavioral states, such as sleeping, eating, working, driving, arguing, etc. The user may touch the drop-down menu to input one or more behavioral states. In particular embodiments, the selected behavioral state may correspond to a selected psychological state”);
instructing the at least one user to measure the at least one audio input to obtain the at least one audio measurement (Jain, para. 61, “The user may touch one or more of the mood icons to input his current mood (i.e., psychological state). Mood intensity widget 440 is a row with numbered icons ranging from one to four that each correspond to a level of intensity of a psychological state. The numbers range from the lowest to highest intensity, with one being the lowest and four being the highest. The user may touch one of the numbers to input an intensity corresponding to a selected mood. In particular embodiments, the mood intensity corresponds to a standard psychometric scale (e.g., Likert scale). Activity input widget 450 is a drop-down menu containing a list of activities (i.e., behavioral states). The list is not illustrated, but could include a variety of behavioral states, such as sleeping, eating, working, driving, arguing, etc. The user may touch the drop-down menu to input one or more behavioral states. In particular embodiments, the selected behavioral state may correspond to a selected psychological state”); and
instructing the at least one user to assign a predetermined threshold to the at least one audio measurement (Jain, para. 215, “a second data set may be collected from the person when he is substantially stressed (such as, for example, when the person reports on mood sensor 400 that he is ‘stressed’ with an intensity of 3 or more on a 0-to-4 Likert scale).”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Gutta to train the device using the score input by the at least one user via the Likert scale, as taught by Jain, because it provides a “corresponding set of control parameters that specify… a self-reported psychological state” wherein the “set of control parameters consists of data parameters that specify when a sensor 112 or data stream is at a normal or expected state.” See Jain at para. 61. In other words, it allows a user to validate the system’s identification of a psychological state.
Regarding claim 17, Gutta in view of Jain teaches the method of claim 16, where training the device comprises storing a record of the predetermined threshold in a memory of the device (Gutta, para. 95, “a data storage capability and means for determining the current occupant so that corresponding histories can be stored for different occupants… In this way, both the mental state/health status classifier 290 and event/class processor 207 may each correlate historical data with particular occupants and employ it in identifying and signaling trends to the output generator 415.” Jain, para. 75, “Mood sensor 400 may access a local data store (e.g., prior psychological and behavioral input stored on the user’s smart phone)”; para. 216, “analysis system 180 may access the stress index history of a person to determine if the stress index of the person has changed over time.”).
Regarding claim 18, Gutta in view of Jain teaches the method of claim 16, where instructing the at least one user to measure the at least one audio input comprises displaying a prompt on the device, where the prompt comprises instructions to measure the at least one audio input (Jain, para. 61, “The user may touch one or more of the mood icons to input his current mood (i.e., psychological state). Mood intensity widget 440 is a row with numbered icons ranging from one to four that each correspond to a level of intensity of a psychological state. The numbers range from the lowest to highest intensity, with one being the lowest and four being the highest. The user may touch one of the numbers to input an intensity corresponding to a selected mood. In particular embodiments, the mood intensity corresponds to a standard psychometric scale (e.g., Likert scale). Activity input widget 450 is a drop-down menu containing a list of activities (i.e., behavioral states). The list is not illustrated, but could include a variety of behavioral states, such as sleeping, eating, working, driving, arguing, etc. The user may touch the drop-down menu to input one or more behavioral states. In particular embodiments, the selected behavioral state may correspond to a selected psychological state”).
Regarding claim 19, Gutta teaches the method of claim 1.
Gutta does not explicitly teach where the device is trained by the at least one user.
However, in an analogous art, Jain teaches where the device is trained by the at least one user (Jain, para. 61, “The user may touch one or more of the mood icons to input his current mood (i.e., psychological state). Mood intensity widget 440 is a row with numbered icons ranging from one to four that each correspond to a level of intensity of a psychological state. The numbers range from the lowest to highest intensity, with one being the lowest and four being the highest. The user may touch one of the numbers to input an intensity corresponding to a selected mood. In particular embodiments, the mood intensity corresponds to a standard psychometric scale (e.g., Likert scale). Activity input widget 450 is a drop-down menu containing a list of activities (i.e., behavioral states). The list is not illustrated, but could include a variety of behavioral states, such as sleeping, eating, working, driving, arguing, etc. The user may touch the drop-down menu to input one or more behavioral states. In particular embodiments, the selected behavioral state may correspond to a selected psychological state”).