DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent PCT/EP2020/081560, filed on 11/10/2020. Priority is also acknowledged to U.S. provisional application 62/934,576, filed on 11/13/2019. The effective filing date of the instant application is therefore 11/13/2019.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 07/21/2022, 08/22/2022, and 05/07/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, all the information disclosure statements are being considered by the examiner.
Drawings
The drawings received on May 12, 2022 are accepted.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. In accordance with MPEP § 2106, claims found to recite statutory subject matter (Step 1: YES) are then analyzed to determine if the claims recite any concepts that equate to an abstract idea (Step 2A, Prong 1). In the instant application, the claims recite the following limitations that equate to an abstract idea:
Claim 1 recites the limitation - analyzing the training data obtained in step i) to identify similarities in the training data and identifying a plurality of user profiles according to the similarities in the training data, wherein each set of training data comprises a variable, wherein the similarities of two sets of training data are defined by the values of the respective variables being in a specific range or being spaced apart by no more than a given threshold. Based on the broadest reasonable interpretation, analyzing the training data to identify similarities based on comparing values to ranges or thresholds could be accomplished with equations that could practically be performed in the human mind. The limitation therefore falls within the mathematical concepts and mental processes groupings, which classifies it as an abstract idea.
Claim 2 recites the limitation - The adjustment method according to claim 1, wherein step ii) comprises using at least one self-learning algorithm. Based on the broadest reasonable interpretation, self-learning algorithms could include equations that could practically be performed in the human mind. The limitation therefore falls within the mathematical concepts and mental processes groupings, which classifies it as an abstract idea.
Claim 5 recites the limitation - analyzing the user-specific training data obtained in step c) and assigning the individual user to at least one individual user profile of the user profiles. Based on the broadest reasonable interpretation, assigning the individual user to at least one individual user profile could practically be performed in the human mind. The limitation therefore falls within the mental processes grouping, which classifies it as an abstract idea.
Claim 6 recites the limitation - assigning the individual user to a predefined starting profile. Based on the broadest reasonable interpretation, assigning the individual user a starting profile could practically be performed in the human mind. The limitation therefore falls within the mental processes grouping, which classifies it as an abstract idea.
Claim 9 recites the limitation - evaluating the at least one image for deriving at least one measurement value of the concentration of the analyte in the body fluid, wherein the analyte measurement is performed by using the user profile-specific measurement setup adjustments for the individual user. Based on the broadest reasonable interpretation, evaluating an image to derive at least one measurement value of the concentration of the analyte in the body fluid could practically be performed in the human mind. The limitation therefore falls within the mental processes grouping, which classifies it as an abstract idea.
Claim 10 recites the limitation - The analytical method according to claim 5, wherein step d) comprises using at least one self-learning algorithm. Based on the broadest reasonable interpretation, self-learning algorithms could include equations that could practically be performed in the human mind. The limitation therefore falls within the mathematical concepts and mental processes groupings, which classifies it as an abstract idea.
Claims 3-4, 7-8, and 11-13 depend from claims that recite a judicial exception and therefore contain the judicial exception themselves.
These limitations recite concepts of analyzing, organizing, and identifying information that are so generically recited that they can be practically performed in the human mind as claimed, which falls under the “Mental Processes” and “Mathematical Concepts” groupings of abstract ideas. These recitations are similar to the concepts of collecting information, analyzing it, and displaying certain results of the collection and analysis in Electric Power Group, LLC v. Alstom (830 F.3d 1350, 119 USPQ2d 1739 (Fed. Cir. 2016)); organizing and manipulating information through mathematical correlations in Digitech Image Techs., LLC v. Electronics for Imaging, Inc. (758 F.3d 1344, 111 USPQ2d 1717 (Fed. Cir. 2014)); and comparing information regarding a sample or test to control or target data in Univ. of Utah Research Found. v. Ambry Genetics Corp. (774 F.3d 755, 113 USPQ2d 1241 (Fed. Cir. 2014)) and Association for Molecular Pathology v. USPTO (689 F.3d 1303, 103 USPQ2d 1681 (Fed. Cir. 2012)), which the courts have identified as concepts that can be practically performed in the human mind or as mathematical relationships. Therefore, these limitations fall under the “Mental Processes” and “Mathematical Concepts” groupings of abstract ideas. As such, claims 1-13 recite an abstract idea (Step 2A, Prong 1: YES).
Claims found to recite a judicial exception under Step 2A, Prong 1 are then further analyzed to determine if the claims as a whole integrate the recited judicial exception into a practical application or not (Step 2A, Prong 2). These judicial exceptions are not integrated into a practical application because the claims do not recite an additional element that reflects an improvement to technology (MPEP § 2106.04(d)(1)). Rather, the claims provide insignificant extra-solution activity (MPEP § 2106.05(g)) and provide mere instructions to apply a judicial exception (MPEP § 2106.05(f)). Specifically, the claims recite the following additional elements:
Claim 1 recites carrying out, by a plurality of users, a plurality of analyte measurements, wherein the analyte measurements at least partly comprise using the camera to capture images of at least a part of an optical test strip having a test field, thereby obtaining training data on the analyte measurements. Claim 1 also recites providing profile-specific measurement setup adjustments for at least one of the user profiles, wherein the profile-specific measurement setup adjustments refer to at least one specific setting for carrying out the at least one analyte measurement by the user, the user belonging to a specific user profile.
Claim 3 recites transmitting the training data obtained in step i) from the users' mobile devices to at least one evaluation server device, wherein at least step ii) is performed by the evaluation server device.
Claim 4 recites the profile-specific measurement setup adjustments at least partially refer to camera adjustments when carrying out step i).
Claim 5 recites using a camera of a mobile device to capture at least one image of at least a part of an optical test strip having a test field and determining at least one analyte concentration value from color formation of the test field; performing the adjustment method according to claim 1; and carrying out, by at least one individual user, a plurality of analyte measurements, wherein the analyte measurements at least partly comprise using the camera to capture images of at least a part of an optical test strip having a test field and thereby obtaining user-specific training data on the analyte measurements.
Claim 7 recites transmitting the user-specific training data obtained in step c) from the individual user's mobile device to at least one evaluation server device.
Claim 8 recites the user profile-specific measurement setup adjustments for the individual user profile are transmitted from the evaluation server device to the individual user's mobile device.
Claim 9 recites the individual performing, using the individual user's mobile device, at least one analyte measurement, wherein the analyte measurement at least partly comprises using the camera to capture at least one image of at least a part of an optical test strip having a test field.
Claim 11 recites a receiving device configured for receiving training data on analytical measurements, the training data being obtained by a plurality of users carrying out a plurality of analyte measurements, wherein the analyte measurements at least partly comprise using the camera to capture images of at least a part of an optical test strip having a test field, thereby obtaining training data on the analyte measurements; an evaluation server configured for performing step ii) of the adjustment method according to claim 1; and a transmitter configured for transmitting profile-specific measurement setup adjustments provided in step ii).
Claim 12 recites a mobile device having at least one camera and being configured for performing the following steps: step c) of the analytical method according to claim 5; receiving the user profile-specific measurement setup adjustments for the individual user profile.
Claim 13 recites a non-transitory computer readable medium having stored thereon computer-executable instructions which, when the program is executed by a mobile device having a camera, cause the mobile device to carry out the method of claim 9.
There are no limitations indicating that the claimed analyzing, assigning, and evaluating require anything other than generic computing systems. As such, these limitations equate to mere instructions to implement the abstract idea on a generic computer, which the courts have stated does not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984. The additional elements also represent activities the courts have identified as insignificant extra-solution activity, including mere data gathering and selecting a particular data source or type of data to be manipulated (MPEP 2106.05(g)). There is no indication that these steps are affected by the judicial exceptions in any way, and thus they do not integrate the recited judicial exceptions into a practical application. As such, claims 1-13 are directed to an abstract idea (Step 2A, Prong 2: NO).
Claims found to be directed to a judicial exception are then further evaluated to determine if the claims recite an inventive concept that provides significantly more than the judicial exception itself (Step 2B). The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims recite conventional additional elements that equate to mere instructions to apply the recited exception in a generic way or in a generic computing environment. The claims also recite conventional additional elements that represent insignificant extra-solution activities. The instant claims recite the following additional elements:
Claim 1 recites carrying out, by a plurality of users, a plurality of analyte measurements, wherein the analyte measurements at least partly comprise using the camera to capture images of at least a part of an optical test strip having a test field, thereby obtaining training data on the analyte measurements. Claim 1 also recites providing profile-specific measurement setup adjustments for at least one of the user profiles, wherein the profile-specific measurement setup adjustments refer to at least one specific setting for carrying out the at least one analyte measurement by the user, the user belonging to a specific user profile.
Claim 3 recites transmitting the training data obtained in step i) from the users' mobile devices to at least one evaluation server device, wherein at least step ii) is performed by the evaluation server device.
Claim 4 recites the profile-specific measurement setup adjustments at least partially refer to camera adjustments when carrying out step i).
Claim 5 recites using a camera of a mobile device to capture at least one image of at least a part of an optical test strip having a test field and determining at least one analyte concentration value from color formation of the test field; performing the adjustment method according to claim 1; and carrying out, by at least one individual user, a plurality of analyte measurements, wherein the analyte measurements at least partly comprise using the camera to capture images of at least a part of an optical test strip having a test field and thereby obtaining user-specific training data on the analyte measurements.
Claim 7 recites transmitting the user-specific training data obtained in step c) from the individual user's mobile device to at least one evaluation server device.
Claim 8 recites the user profile-specific measurement setup adjustments for the individual user profile are transmitted from the evaluation server device to the individual user's mobile device.
Claim 9 recites the individual performing, using the individual user's mobile device, at least one analyte measurement, wherein the analyte measurement at least partly comprises using the camera to capture at least one image of at least a part of an optical test strip having a test field.
Claim 11 recites a receiving device configured for receiving training data on analytical measurements, the training data being obtained by a plurality of users carrying out a plurality of analyte measurements, wherein the analyte measurements at least partly comprise using the camera to capture images of at least a part of an optical test strip having a test field, thereby obtaining training data on the analyte measurements; an evaluation server configured for performing step ii) of the adjustment method according to claim 1; and a transmitter configured for transmitting profile-specific measurement setup adjustments provided in step ii).
Claim 12 recites a mobile device having at least one camera and being configured for performing the following steps: step c) of the analytical method according to claim 5; receiving the user profile-specific measurement setup adjustments for the individual user profile.
Claim 13 recites a non-transitory computer readable medium having stored thereon computer-executable instructions which, when the program is executed by a mobile device having a camera, cause the mobile device to carry out the method of claim 9.
As discussed above, there are no additional limitations indicating that the claimed analyzing, assigning, and evaluating require anything other than generic computer components to carry out the recited abstract idea. Claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984. MPEP 2106.05(f) explains that mere instructions to apply the judicial exception cannot provide an inventive concept. As specified in MPEP 2106.05(g), extra-solution activities are activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim. Insignificant extra-solution activities include mere data gathering and selecting a particular data source or type of data to be manipulated. Examples include performing clinical tests on individuals to obtain input for an equation (In re Grams, 888 F.2d 835, 839-40; 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989)) and determining the level of a biomarker in blood (Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; see also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012)). Additionally, Budagavi (2006, Journal of Real-Time Image Processing, Vol. 1, Pages 3-7) teaches that taking photos with a mobile device, such as a mobile phone that can transmit and receive information, is conventional (Page 3, Column 1, Paragraph 1: Mobile handheld and battery-operated consumer electronic devices such as digital still cameras, personal media players, digital camcorders, camera phones, and mobile video telephones have become very popular; Page 3, Column 2, Paragraph 1: Digital still cameras and digital camcorders cater predominantly to the traditional application of image capture. Camera phones extend this functionality by adding mobility and connectivity).
The additional elements do not comprise an inventive concept when considered individually or as an ordered combination that transforms the claimed judicial exception into a patent-eligible application of the judicial exception. Therefore, the claims do not amount to significantly more than the judicial exception itself (Step 2B: NO). As such, claims 1-13 are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over Shen et al. (U.S. 2016/0104057, 07/21/2022 IDS Document), in view of Tyrrell et al. (U.S. 2016/0077091). The italicized text corresponds to the reference art.
Claim 1. A method for adjusting a measurement setup used in an analytical method of determining a concentration of an analyte in a body fluid, the analytical method comprising using a camera of a mobile device to capture at least one image of at least a part of an optical test strip having a test field, and further comprising determining at least one analyte concentration value from color formation of the test field, wherein the adjustment method comprises:
i) carrying out, by a plurality of users, a plurality of analyte measurements, wherein the analyte measurements at least partly comprise using the camera to capture images of at least a part of an optical test strip having a test field, thereby obtaining training data on the analyte measurements, wherein the training data include at least one of: color information derived from the images; information derived from at least one color reference card visible in the images; analyte measurement values derived from the images; sensor data obtained by using at least one sensor of the users' mobile devices selected from the group consisting of: an angle sensor, a light sensor, a motion sensor, an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a GPS sensor, a pressure sensor, a temperature sensor, and a biometric sensor; setup information relating to a setup of the users' mobile devices for carrying out the analyte measurements; and health information relating to the users;
ii) analyzing the training data obtained in step i) to identify similarities in the training data and identifying a plurality of user profiles according to the similarities in the training data, wherein each set of training data comprises a variable, wherein the similarities of two sets of training data are defined by the values of the respective variables being in a specific range or being spaced apart by no more than a given threshold, wherein the similarities identified in step ii) at least partially refer to at least one of the following: lighting conditions when performing step i); a users' tremor when performing step i); and
iii) providing profile-specific measurement setup adjustments for at least one of the user profiles, wherein the profile-specific measurement setup adjustments refer to at least one specific setting for carrying out the at least one analyte measurement by the user, the user belonging to a specific user profile, wherein the profile-specific measurement setup adjustments at least partially refer to at least one of: a handling procedure of the analyte measurement; a hardware setup of the mobile device; a software setup of the mobile device; instructions given by the mobile device to the user; a degree of reliability of measurement result of the analyte measurement; a tolerance range for admissible parameters when performing the analyte measurement; a timing sequence for performing the analyte measurement; a failsafe algorithm for performing the analyte measurement; and an enhanced analyte measurement accuracy.
Claim 2. The adjustment method according to claim 1, wherein step ii) comprises using at least one self-learning algorithm.
Claim 3. The adjustment method according to claim 1, further comprising transmitting the training data obtained in step i) from the users' mobile devices to at least one evaluation server device, wherein at least step ii) is performed by the evaluation server device.
Claim 4. The adjustment method according to claim 1, wherein the profile-specific measurement setup adjustments at least partially refer to camera adjustments when carrying out step i).
Claim 5. An analytical method of determining a concentration of an analyte in a body fluid, comprising:
a) using a camera of a mobile device to capture at least one image of at least a part of an optical test strip having a test field and determining at least one analyte concentration value from color formation of the test field;
b) performing the adjustment method according to claim 1;
c) carrying out, by at least one individual user, a plurality of analyte measurements, wherein the analyte measurements at least partly comprise using the camera to capture images of at least a part of an optical test strip having a test field and thereby obtaining user-specific training data on the analyte measurements;
d) analyzing the user-specific training data obtained in step c) and assigning the individual user to at least one individual user profile of the user profiles; and providing user profile-specific measurement setup adjustments for the individual user profile.
Claim 6. The analytical method according to claim 5, wherein the method further comprises, before step c): assigning the individual user to a predefined starting profile.
Claim 7. The analytical method according to claim 5, further comprising transmitting the user-specific training data obtained in step c) from the individual user's mobile device to at least one evaluation server device.
Claim 8. The analytical method according to claim 7, wherein the user profile-specific measurement setup adjustments for the individual user profile are transmitted from the evaluation server device to the individual user's mobile device.
Claim 9. The analytical method according to claim 8, further comprising:
i. the individual performing, using the individual user's mobile device, at least one analyte measurement, wherein the analyte measurement at least partly comprises using the camera to capture at least one image of at least a part of an optical test strip having a test field, and
ii. evaluating the at least one image for deriving at least one measurement value of the concentration of the analyte in the body fluid, wherein the analyte measurement is performed by using the user profile-specific measurement setup adjustments for the individual user.
Claim 10. The analytical method according to claim 5, wherein step d) comprises using at least one self-learning algorithm.
Claim 11. An adjustment system for performing the method according to claim 1, comprising:
I) a receiving device configured for receiving training data on analytical measurements, the training data being obtained by a plurality of users carrying out a plurality of analyte measurements, wherein the analyte measurements at least partly comprise using the camera to capture images of at least a part of an optical test strip having a test field, thereby obtaining training data on the analyte measurements;
II) an evaluation server configured for performing step ii) of the adjustment method according to claim 1; and
III) a transmitter configured for transmitting profile-specific measurement setup adjustments provided in step ii).
Claim 12. A mobile device having at least one camera and being configured for performing the following steps: step c) of the analytical method according to claim 5; receiving the user profile-specific measurement setup adjustments for the individual user profile.
Claim 13. A non-transitory computer readable medium having stored thereon computer-executable instructions which, when the program is executed by a mobile device having a camera, cause the mobile device to carry out the method of claim 9.
Regarding Claim 1.ii, Shen et al. teaches analyzing training data to identify similarities and identifying user profiles according to the similarities (Page 4, Paragraph 0038: the machine learning engine can identify contextual similarities) based on comparing numerical variables against a threshold (Page 4, Paragraph 0039: a dimension reduction analysis). Shen et al. also teaches that the similarities identified can refer to lighting conditions or a user's tremor (Page 1, Paragraph 0014: the machine learning engine can then compute preference profiles with contextual conditions (e.g., camera orientation or lighting condition)).
Regarding Claim 1.iii, Shen et al. teaches providing profile-specific measurement setup adjustments for user profiles (Page 1, Paragraph 0014: The adjustment parameters can be used to tune an image to virtually any adjustment that can be made to an image during capture or post-processing).
Regarding Claim 2, Shen et al. teaches using at least one self-learning algorithm as recited in claim 1.ii (Page 4, Paragraph 0038: the machine learning engine can identify contextual similarities).
Regarding Claim 3, Shen et al. teaches transmitting training data (raw photos and contextual information) from the users' mobile devices to an evaluation server device (Page 1, Paragraph 0012: The camera-enabled device can send the selection to a machine learning engine (e.g., implemented on the camera-enabled device, a cloud computer server, or the same device as the user interface)).
Regarding Claim 4, Shen et al. teaches the profile-specific measurement setup adjustments at least partially refer to camera adjustments (Page 1, Paragraph 0011: tune a digital image according to various parameters such as color saturation, white balance, exposure, lens shading, and focus location).
Regarding Claims 5.b and 5.d, Shen et al. teaches analyzing training data to identify similarities and identifying user profiles according to the similarities (Page 4, Paragraph 0038: the machine learning engine can identify contextual similarities) based on comparing numerical variables within a certain range (Page 4, Paragraph 0039: a dimension reduction analysis). Shen et al. also teaches that the similarities identified can refer to lighting conditions or a user's tremor (Page 1, Paragraph 0014: the machine learning engine can then compute preference profiles with contextual conditions (e.g., camera orientation or lighting condition)). Shen et al. also teaches providing profile-specific measurement setup adjustments for user profiles (Page 1, Paragraph 0014: The adjustment parameters can be used to tune an image to virtually any adjustment that can be made to an image during capture or post-processing).
Regarding Claim 6, Shen et al. also teaches assigning the individual user to a predefined starting profile (Page 5, Paragraph 0044: The machine learning engine can generate a photo preference profile for a user).
Regarding Claim 7, Shen et al. also teaches transmitting training data (raw photos and contextual information) from the users' mobile devices to an evaluation server device (Page 1, Paragraph 0012: The camera-enabled device can send the selection to a machine learning engine, e.g., implemented on the camera-enabled device, a cloud computer server, or the same device as the user interface).
Regarding Claim 8, Shen et al. also teaches the user profile-specific measurement setup adjustments for the individual user profile are transmitted from the evaluation server device to the individual user's mobile device (Page 2, Paragraph 0019: the photo preference profile is computed externally by a computer server implementing a machine learning engine and the camera-enabled device can download the photo preference profile via a network interface).
Regarding Claim 9.ii, Shen et al. also teaches the analyte measurement is performed by using the user profile-specific measurement setup adjustments for the individual user (Page 6, paragraph 0058: the computing device can provide the photo preference profile to an image processor to adjust subsequently captured photographs provided to the image processor).
Regarding Claim 10, Shen et al. teaches using a self-learning algorithm to analyze training data to identify similarities and to identify user profiles according to the similarities (Page 4, Paragraph 0038: the machine learning engine can identify contextual similarities) based on comparing numerical variables within a certain range (Page 4, Paragraph 0039: a dimension reduction analysis). Shen et al. also teaches that the similarities identified can refer to lighting conditions or a user's tremor (Page 1, Paragraph 0014: the machine learning engine can then compute preference profiles with contextual conditions (e.g., camera orientation or lighting condition)).
Regarding Claim 11, Shen et al. also teaches a receiving device configured for receiving training data on analytical measurements, the training data being obtained by multiple users carrying out multiple analyte measurements, wherein the analyte measurements use the camera to capture images of an optical test strip with a test field, and an evaluation server configured for performing step ii) of the adjustment method according to claim 1 (Page 2, Paragraph 0023: when uploading a training image to the machine learning engine, the image signal processor or the general processor can provide an image-context attribute associated with the training image to the machine learning engine). Shen et al. also teaches a transmitter configured for transmitting the profile-specific measurement setup adjustments provided in step ii) (Page 2, Paragraph 0019: the photo preference profile is computed externally by a computer server implementing a machine learning engine and the camera-enabled device can download the photo preference profile via a network interface).
Regarding Claim 12, Shen et al. also teaches a mobile device with a camera configured for performing step c) of the analytical method according to claim 5 and for receiving the user profile-specific measurement setup adjustments for the individual user profile (Page 1, Paragraphs 0002-0003: A camera-enabled device (e.g., a digital camera or a camera-enabled phone) includes a camera module which is an image capturing component of the camera-enabled device. The camera module may be integrated with control electronics and an output interface to other logic component(s) of the camera-enabled device. The camera-enabled device can further include an image processor that transforms the output of the camera module into a digital image. The image processor may process and adjust the raw photographs based on default image processing settings and calibration parameters).
Regarding Claim 13, Shen et al. also teaches a machine-readable storage medium storing instructions that, when executed, carry out the method of claim 9 (Page 7, Paragraph 0071: Software or firmware for use in implementing the techniques introduced here may be stored on a machine-readable storage medium and may be executed).
Shen et al. does not teach using the camera to capture images of at least a part of an optical test strip having a test field, thereby obtaining training data (Claim 1.i). Shen et al. also does not teach some of the profile-specific measurement setup adjustments (Claim 1.iii). Shen et al. also does not teach using the camera to capture images of at least a part of an optical test strip having a test field and obtaining data on the analyte measurements (Claims 5.a, 5.c, and 9.i).
Regarding Claim 1.i, Tyrrell et al. teaches using the camera to capture images of at least a part of an optical test strip having a test field, thereby obtaining training data on the analyte measurements (Page 2, Paragraph 0008: smart phone or other device used for capturing digital images to be used for collecting, analyzing, qualifying, quantifying, processing, validating, determining, and/or verifying test results (e.g., test strips)). Tyrrell et al. also teaches the utilization of GPS data (Page 2, Paragraph 0015), analyte data (Page 2, Paragraph 0016: metabolite), information derived from a color reference card visible in the images (Page 4, Paragraph 0053: color swatches placed for calibration), color information (Page 4, Paragraph 0055: color coding can be matched), the user's setup information (Page 4, Paragraph 0053: determine if the hardware/camera is functioning), and health information related to the user (Page 1, Paragraph 0002: a well-known application is the pregnancy test).
Regarding Claim 1.iii, Tyrrell et al. also teaches making specific suggestions to improve the accuracy of the results (Page 5, Paragraph 0063: control temperature when suggested or needed to help ensure accurate test results), including a timing sequence for performing the measurements (Page 5, Paragraph 0063: if cold, add an incubation time for the device), handling procedures (Page 11, Paragraph 0149: ensures proper orientation), and providing instructions that include setup and operation (Page 2, Paragraph 0014: providing instructions via a GUI).
Regarding Claim 5.a and 5.c, Tyrrell et al. teaches using the camera to capture images of an optical test strip having a test field, thereby obtaining training data on the analyte measurements (Page 2, Paragraph 0008: smart phone or other device used for capturing digital images to be used for collecting, analyzing, qualifying, quantifying, processing, validating, determining, and/or verifying test results (e.g. test strips)).
Regarding Claim 9.i, Tyrrell et al. teaches using the camera to capture images of an optical test strip having a test field, thereby obtaining training data on the analyte measurements (Page 2, Paragraph 0008: smart phone or other device used for capturing digital images to be used for collecting, analyzing, qualifying, quantifying, processing, validating, determining, and/or verifying test results (e.g. test strips)).
An invention would have been prima facie obvious to one of ordinary skill in the art at the time of the effective filing date of the invention if some teaching in the prior art would have led that person to combine the prior art teachings to arrive at the claimed invention. Tyrrell et al. and Shen et al. both utilize metadata associated with images to alter images. Shen et al. uses data associated with images in combination with a machine learning framework to enhance generic images. Tyrrell et al. seeks to improve conditions associated with taking pictures of optical test strips in order to enhance determination of the test result. It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to combine the analytical framework of Shen et al. with the test strip image system of Tyrrell et al. in order to harness the capabilities of self-learning algorithms to improve the reading of test results captured in an image of a test strip. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to combine the methods of the two references indicated above.
Furthermore, one of ordinary skill in the art would predict that the methods taught by Shen et al. could be readily added to the methods of Tyrrell et al. with a reasonable expectation of success because both utilize images and data associated with images as inputs in order to alter an image. Furthermore, Madabhushi and Lee (2016, Medical Image Analysis, Vol. 33, Pgs. 170-175) teach that the theories employed by Shen et al. (machine learning framework) and Tyrrell et al. (image calibration) are well established in the field of medicine (Page 170, Paragraph 1: The ability to mine sub-visual image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance), further supporting a reasonable expectation of success in combining them. Accordingly, claims 1-13 taken as a whole would have been prima facie obvious before the effective filing date and are therefore rejected under 35 U.S.C. 103.
Conclusion
No claims are allowed.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BLAKE H ELKINS whose telephone number is (571) 272-2649. The examiner can normally be reached Monday-Friday, 8 AM-5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Karlheinz Skowronek can be reached at (571) 272-9047. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.H.E./Examiner, Art Unit 1687
/Karlheinz R. Skowronek/Supervisory Patent Examiner, Art Unit 1687