DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-19 are under examination.
CLAIM INTERPRETATION
2. The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Instant claims 16 and 17 recite “augmented reality means”. While the claims recite the intended function of the “augmented reality means”, they do not recite sufficient structure or acts for performing that function. On page 17, the specification states that “augmented reality means” includes sounds that enhance the user’s desire to drink or eat something. For purposes of this examination, “augmented reality means” is interpreted to include a notification for the user to eat or drink something.
Claim Rejections - 35 USC § 101
3. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
4. Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.
Claims 1-17 are directed to a method of detecting or quantifying a liquid, food, or medication intake of a user wearing a hearing device. As described in Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. __, 134 S. Ct. 2347, 110 U.S.P.Q.2d 1976 (2014), a two-step analysis is required in considering the patent eligibility of the claimed subject matter. The first step requires determining if the claimed subject matter is directed to a judicial exception. The instant claims require the steps of analyzing audio signals by applying a machine learning algorithm and determining values indicative of how often or the amount of food, liquid, or medication is ingested by the user. However, applying a machine learning algorithm to determine values is a mathematical algorithm. Dependent claims 2-17 are drawn to additional mathematical steps, the data to be inputted into the mathematical algorithm, or the data outputted from the mathematical algorithm. The courts have found mathematical algorithms to be drawn to the judicial exception of an abstract idea (In re Grams, 888 F.2d 835, 12 U.S.P.Q.2d 1824 (Fed. Cir. 1989)). Thus, the instant claims are drawn to a judicial exception.
This judicial exception is not integrated into a practical application. The instant claims do not recite an element that reflects an improvement in the functioning of a computer or other technology, an element that applies the judicial exception to effect a particular treatment, an element that implements the judicial exception with a particular machine, or an element that effects a transformation of a particular article to a different state or thing. The instant claims recite receiving a signal, collecting a signal, storing values, and generating an output. However, these steps are extra-solution data gathering or data output steps. Extra solution activity does not impart a practical application to a judicial exception. The instant claims recite the elements of a hearing device, hearing system, remote server, cloud, microphone, sensor, physiological sensor, a user interface, processor, and sound output device. However, the instant claims do not recite any structural limitations of these elements. Thus, these elements are not drawn to a particular machine and do not integrate the judicial exception to a practical application.
The second part of the analysis requires determining if the claims include additional elements that are sufficient to amount to significantly more than the judicial exception. The instant claims recite the additional elements of receiving a signal, collecting a signal, storing values, and generating an output. However, these are well-understood, conventional, and routine data gathering and outputting steps (MPEP § 2106.05(d)(II)). The instant claims recite the elements of a hearing device, hearing system, remote server, cloud, microphone, sensor, physiological sensor, a user interface, processor, and sound output device. However, these elements are well-understood, conventional, and routine components of a hearing device (Specification, paragraphs [0002], [0017], [0045]-[0053], [0060]-[0063], and [0082]). Reciting well-understood, conventional, and routine steps and elements does not transform the judicial exception into patent eligible subject matter. In addition, the recitation of the specific types of data to be used in the judicial exception does not transform the abstract idea into a non-abstract idea. (See buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 112 U.S.P.Q.2d 1093 (Fed. Cir. 2014)). Furthermore, the elements taken as a combination are also well-understood, routine, and conventional, since the elements merely specify the types of data for a data gathering step or for hearing aid devices. Thus, the instant claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
5. Claim 18 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter.
Instant claim 18 recites a computer-readable medium, which encompasses carrier waves. (Specification, paragraph [0082]). Carrier waves are non-statutory per se. Thus, the instant claim is drawn to non-statutory subject matter.
Claim Rejections - 35 USC § 102
6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 6, 18 and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ahmed et al. (US 2019/0231253 A1).
Regarding claim 1, Ahmed et al. teach a method for detecting and quantifying liquid or food intake of a user wearing a hearing device (paragraph [0024]) which comprises at least one microphone (paragraphs [0026] and [0028]), where the method includes the steps of receiving an audio signal from the microphone or a sensor signal from a sensor (paragraphs [0043] and [0044]); collecting and analyzing the audio signal or sensor signal to detect each time the user drinks, takes medication, or eats something (paragraph [0044]), where drinking or medication intake is distinguished from eating, and drinking is distinguished from medication intake (paragraphs [0044] and [0056]), to determine values indicative of how often this is detected or the amount of liquid, food, or medication (paragraph [0055]); where the step of analyzing includes applying a machine learning algorithm in the hearing device or hearing system or a remote server or cloud (paragraphs [0048] and [0063]); and storing the determined values in the hearing system and, based on the stored values, generating a predetermined type of output (i.e., notifications) (paragraphs [0029]-[0032] and [0055]).
Regarding claim 2, Ahmed et al. teach where the machine learning algorithm is applied in its training phase (i.e., “learning stage”) to learn user-specific manners of drinking, eating, or medication intake and the manners are incorporated into future analysis (paragraph [0048]).
Regarding claim 3, Ahmed et al. teach that two or more phases of drinking, eating, or medication intake are distinguished in the course of detecting liquid, food, or medication intake (paragraphs [0046] and [0055]), where the analysis is based on different sensors or different machine learning algorithms (paragraphs [0043], [0048] and [0049]).
Regarding claims 4 and 6, Ahmed et al. teach detecting tilting of the user’s head via a movement sensor during medication intake (paragraph [0056]).
Regarding claim 18, Ahmed et al. teach a computer readable medium with a program that is adapted to carry out the steps of the method (paragraphs [0030] and [0032]).
Regarding claim 19, Ahmed et al. teach a hearing device including a microphone, a processor, and a sound output device, where the hearing device is adapted for performing the method (paragraphs [0026] and [0028]).
Claim Rejections - 35 USC § 103
7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Ahmed et al. (US 2019/0231253 A1) as applied to claims 1-4, 6, 18 and 19 above, and further in view of Connor (US 2015/0379238 A1).
Ahmed et al. is applied as above.
While Ahmed et al. teaches distinguishing between different activities (paragraphs [0044] and [0056]), Ahmed et al. does not teach detecting the phase of medication intake of bringing a medication in contact with the mouth or inserting medication into the mouth.
Regarding claim 5, Connor teaches a method that includes detecting bringing food to the mouth, inserting food into the mouth, chewing or swallowing the food, and lowering the hand (paragraph [0240]).
It would have been obvious for one of ordinary skill in the art, at the time of filing, to combine the teachings of Ahmed et al. and Connor. Both Ahmed et al. and Connor are drawn to detecting food intake (Ahmed et al., paragraph [0055]; Connor, abstract). Connor offers the advantage of being able to detect the physical motions of eating (paragraph [0240]). Thus, one of ordinary skill in the art would have been motivated to incorporate the teachings of Connor into the teachings of Ahmed et al. in order to better monitor food intake. In addition, one of ordinary skill in the art would have had a reasonable expectation of success, since a food intake monitoring system may incorporate multiple sensors such as those taught by Ahmed et al. and Connor.
9. Claims 7-10 and 12-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ahmed et al. (US 2019/0231253 A1) as applied to claims 1-4, 6, 18 and 19 above, and further in view of Shalon et al. (US 2006/0064037 A1).
Ahmed et al. is applied as above.
However, Ahmed et al. does not teach using the physiological property to determine the liquid intake.
Regarding claim 7, Shalon et al. teach where the sensor signals comprise physiological signals indicative of a physiological property that determines which kind of liquid the user is ingesting (paragraph [0329]).
Regarding claim 8, Shalon et al. teach where the physiological signal is indicative of a cardiovascular property, a body fluid analyte level, and body temperature (paragraph [0329]).
Regarding claim 9, Shalon et al. teach the amount of water ingested is based on the physiological property (paragraph [0329]).
Regarding claim 10, Shalon et al. teach where the machine learning algorithm is an artificial neural network (paragraph [0256]), where the input data set is sensor data collected over a predetermined period of time (paragraphs [0212], [0256], [0258] and [0261]-[0263]), where the output data set includes the frequency or number of detected liquid or food intakes as well as the duration (paragraph [0261]), and where the learning phase is implemented by supervised learning using input sensor data (paragraph [0256]).
Regarding claim 12, Shalon et al. teach where the machine learning method is a Hidden Markov Model (paragraph [0256]).
Regarding claim 13, Shalon et al. teach where the dehydration risk is estimated based on the determined values of the amount and frequency of the user’s liquid intake (paragraph [0329]) and where the generated output counsels the user to ingest a lacking amount of liquid (paragraph [0329]).
Regarding claim 14, Shalon et al. teach where the interactive user interface is provided in the hearing system (paragraph [0302]), and where the interface allows the user to input additional information (paragraph [0311]).
Regarding claim 15, Shalon et al. teach where the information to take a medication is stored in the system (paragraph [0326]), where when fluid intake is detected, generating an output based on questioning the user whether he has taken the medication (paragraphs [0213] and [0326]), and transmitting the information to a health care professional (paragraph [0326]).
Regarding claim 16, Shalon et al. teach generating an output based on the frequency and amount of liquid ingested by the user, an output to enhance the user’s desire to drink by an augmented reality means (virtual coach) in the hearing system (paragraphs [0206] and [0329]).
Regarding claim 17, Shalon et al. teach when detecting that the user is drinking, to generate an output to enhance the user’s experience of drinking by augmented reality means (providing feedback in real-time) (paragraphs [0206] and [0326]).
It would have been obvious for one of ordinary skill in the art, at the time of filing, to combine the teachings of Shalon et al. and Ahmed et al. Both Shalon et al. and Ahmed et al. teach using hearables to monitor food intake (Ahmed et al., paragraph [0055]; Shalon et al., paragraph [0102]). Shalon et al. offers the benefit of a virtual coach that encourages better behavior (paragraph [0329]). Thus, one of ordinary skill in the art would have been motivated to incorporate the teachings of Shalon et al. into the teachings of Ahmed et al. to gain the benefit of coaching the user for better health. Furthermore, one of ordinary skill in the art would have had a reasonable expectation of success, because the virtual coach may be readily implemented as software for the system of Ahmed et al.
10. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Ahmed et al. (US 2019/0231253 A1) as applied to claims 1-4, 6, 18 and 19 above, and further in view of Pedersen et al. (US 2019/0394586 A1).
Ahmed et al. is applied as above.
However, Ahmed et al. does not teach where the neural network has a hidden layer.
Pedersen et al. teach a hearing device (abstract), which may be used to detect a user’s food intake (paragraph [0001]), that utilizes a deep neural network with a hidden layer (paragraphs [0145] and [0147]).
It would have been obvious for one of ordinary skill in the art, at the time of filing, to combine the teachings of Pedersen et al. and Ahmed et al. Both Pedersen et al. and Ahmed et al. teach using hearables to monitor food intake (Ahmed et al., paragraph [0055]; Pedersen et al., paragraph [0001]). Pedersen et al. offers the benefit of distinguishing acoustic events to detect an activity (paragraph [0001]). Thus, one of ordinary skill in the art would have been motivated to incorporate the teachings of Pedersen et al. into the teachings of Ahmed et al. to gain the benefit of being able to better distinguish acoustic events. Furthermore, one of ordinary skill in the art would have had a reasonable expectation of success, because the analysis taught by Pedersen et al. may be readily implemented into the system of Ahmed et al.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JERRY LIN whose telephone number is (571)272-2561. The examiner can normally be reached T-F 7am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Olivia Wise can be reached at (571) 272-2249. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JERRY LIN/ Primary Examiner, Art Unit 1685