DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "the estimated speed" in line 13. There is insufficient antecedent basis for this limitation in the claim. Furthermore, it is unclear whether "the estimated speed" refers to the "speed" previously introduced in line 6 of Claim 1 or to a separate element.
Claim 1 recites the limitation "the estimated grade" in line 13. There is insufficient antecedent basis for this limitation in the claim. Furthermore, it is unclear whether "the estimated grade" refers to the "grade" previously introduced in line 10 of Claim 1 or to a separate element.
Claim 6 recites the limitation "the horizontal and vertical velocity intersect" in lines 3-4. There is insufficient antecedent basis for this limitation in the claim.
Claim 6 recites the limitation "the ground" in line 4. There is insufficient antecedent basis for this limitation in the claim.
Claim 6 recites the limitation "the horizontal velocity, vertical velocity and vertical displacement amplitudes" in line 5. There is insufficient antecedent basis for this limitation in the claim.
Claim 7 recites "a motion sensor of the device" in line 8. It is unclear whether this limitation refers to the previously introduced "at least one motion sensor of the device" from Claim 1 or to a separate element.
Claim 8 recites "the speed of the user" in line 9. It is unclear whether this limitation refers to the previously introduced "a speed of the user" from line 6 of Claim 1, "the estimated speed" from line 13 of Claim 1, or a separate element.
Claim 10 recites the limitation "the estimated speed" in line 12. There is insufficient antecedent basis for this limitation in the claim. Furthermore, it is unclear whether "the estimated speed" refers to the "speed" previously introduced in line 7 of Claim 10 or to a separate element.
Claim 10 recites the limitation "the estimated grade" in lines 12-13. There is insufficient antecedent basis for this limitation in the claim. Furthermore, it is unclear whether "the estimated grade" refers to the "grade" previously introduced in line 10 of Claim 10 or to a separate element.
Claim 15 recites the limitation "the horizontal and vertical velocity intersect" in lines 3-4. There is insufficient antecedent basis for this limitation in the claim.
Claim 15 recites the limitation "the ground" in line 4. There is insufficient antecedent basis for this limitation in the claim.
Claim 15 recites the limitation "the horizontal velocity, vertical velocity and vertical displacement amplitudes" in line 5. There is insufficient antecedent basis for this limitation in the claim.
Claim 16 recites "a motion sensor of the device" in line 8. It is unclear whether this limitation refers to the previously introduced "at least one motion sensor of the device" from Claim 10 or to a separate element.
Claim 17 recites "the speed of the user" in line 9. It is unclear whether this limitation refers to the previously introduced "a speed of the user" from line 7 of Claim 10, "the estimated speed" from line 12 of Claim 10, or a separate element.
Claim 19 recites the limitation "the estimated speed" in line 11. There is insufficient antecedent basis for this limitation in the claim. Furthermore, it is unclear whether "the estimated speed" refers to the "speed" previously introduced in line 6 of Claim 19 or to a separate element.
Claim 19 recites the limitation "the estimated grade" in lines 11-12. There is insufficient antecedent basis for this limitation in the claim. Furthermore, it is unclear whether "the estimated grade" refers to the "grade" previously introduced in line 9 of Claim 19 or to a separate element.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Each of Claims 1-20 has been analyzed to determine whether it is directed to any judicial exceptions.
Step 1
Claims 1-9 recite a series of steps or acts for determining energy expenditure. Thus, the claims are directed to a process, which is one of the statutory categories of invention.
Claims 10-20 recite a system and non-transitory, computer-readable storage medium for determining energy expenditure. Thus, the claims are directed to a machine, which is one of the statutory categories of invention.
Step 2A, Prong 1
Each of Claims 1-20 recites at least one step or instruction for determining energy expenditure, which is grouped under the 2019 PEG as a mental process or a certain method of organizing human activity. The claimed determining steps can be practically performed in the human mind using mental steps or basic critical thinking, which are types of activities that the courts have found to represent abstract ideas.
Accordingly, each of Claims 1-20 recites an abstract idea.
Specifically, Claim 1 recites:
obtaining, with at least one processor of a device, face tracking data associated with a user;
determining, with the at least one processor, a step cadence of the user based on the face tracking data;
determining, with the at least one processor, a speed of the user based on the step cadence and a stride length of the user;
obtaining, with the at least one processor, device motion data from at least one motion sensor of the device;
determining, with the at least one processor, a grade of a surface on which the user is walking or running based on at least one of the device motion data or the face tracking data; and
determining, with the at least one processor, an energy expenditure of the user based on the estimated speed, the estimated grade and a caloric expenditure model.
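For illustration of why the "determining" steps above are regarded as practicable in the human mind or with pen and paper, the core computations reduce to counting and simple arithmetic. The following sketch is purely hypothetical: the function names, the peak-counting heuristic, and the linear caloric model (patterned on the well-known ACSM-style walking equation) are the examiner's illustrative assumptions and are not drawn from Applicant's disclosure.

```python
# Hypothetical sketch only; names and the caloric model are illustrative
# assumptions, not Applicant's disclosed implementation.

def step_cadence(vertical_positions, fps):
    """Count steps as local maxima in the vertical face position,
    then convert to steps per minute."""
    steps = sum(
        1
        for i in range(1, len(vertical_positions) - 1)
        if vertical_positions[i - 1] < vertical_positions[i] > vertical_positions[i + 1]
    )
    duration_min = len(vertical_positions) / fps / 60.0
    return steps / duration_min

def speed(cadence_spm, stride_m):
    """Speed (m/min) = step cadence (steps/min) x stride length (m/step)."""
    return cadence_spm * stride_m

def energy_expenditure(speed_m_min, grade, weight_kg, minutes):
    """Hypothetical linear caloric model in the style of the ACSM walking
    equation: VO2 (ml/kg/min) = 0.1*speed + 1.8*speed*grade + 3.5;
    kcal = VO2 * weight * minutes / 200 (assumes ~5 kcal per liter O2)."""
    vo2 = 0.1 * speed_m_min + 1.8 * speed_m_min * grade + 3.5
    return vo2 * weight_kg * minutes / 200.0
```

For example, a cadence of 180 steps per minute with a 0.5 m stride yields a speed of 90 m/min, from which the hypothetical model returns a calorie estimate given a grade, body weight, and duration.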
Specifically, Claim 10 recites:
at least one processor;
memory storing instructions that when executed by the at least one processor, cause the at least one processor to perform operations comprising:
obtaining face tracking data associated with a user;
determining a step cadence of the user based on the face tracking data;
determining a speed of the user based on the step cadence and a stride length of the user;
obtaining device motion data from at least one motion sensor of the device;
determining a grade of a surface on which the user is walking or running based on at least one of the device motion data or the face tracking data; and
determining an energy expenditure of the user based on the estimated speed, the estimated grade and a caloric expenditure model.
Specifically, Claim 19 recites:
obtaining face tracking data associated with a user;
determining a step cadence of the user based on the face tracking data;
determining a speed of the user based on the step cadence and a stride length of the user;
obtaining device motion data from at least one motion sensor of the device;
determining a grade of a surface on which the user is walking or running based on at least one of the device motion data or the face tracking data; and
determining an energy expenditure of the user based on the estimated speed, the estimated grade and a caloric expenditure model.
Further, dependent Claims 2-9, 11-18, and 20 merely include limitations that either further define the abstract idea (and thus do not make the abstract idea any less abstract) or amount to no more than generally linking the use of the abstract idea to a particular technological environment or field of use, because they are merely incidental or token additions to the claims that do not alter or affect how the process steps are performed.
Accordingly, as indicated above, each of the above-identified claims recites an abstract idea.
Step 2A, Prong 2
The above-identified abstract idea in each of independent Claims 1, 10, and 19 (and their respective dependent Claims 2-9, 11-18, and 20) is not integrated into a practical application under the 2019 PEG because the additional elements (identified above in independent Claims 1, 10, and 19), either alone or in combination, generally link the use of the above-identified abstract idea to a particular technological environment or field of use. More specifically, the additional elements of: “at least one processor”, “motion sensor”, “camera”, “mobile phone”, “global navigation satellite system (GNSS) receiver”, and “a non-transitory, computer-readable storage medium” are generically recited computer elements in independent Claims 1, 10, and 19 (and their respective dependent claims) which do not improve the functioning of a computer, or any other technology or technical field. Nor do these above-identified additional elements serve to apply the above-identified abstract idea with, or by use of, a particular machine, effect a transformation, or apply or use the above-identified abstract idea in some other meaningful way beyond generally linking the use thereof to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. Furthermore, the above-identified additional elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer. For at least these reasons, the abstract idea identified above in independent Claims 1, 10, and 19 (and their respective dependent claims) is not integrated into a practical application under the 2019 PEG.
Moreover, the above-identified abstract idea is not integrated into a practical application under the 2019 PEG because the claimed method and system merely implement the above-identified abstract idea (e.g., mental process and certain method of organizing human activity) using rules (e.g., computer instructions) executed by a computer (e.g., “processor” as claimed). In other words, these claims are merely directed to an abstract idea with additional generic computer elements which do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer. Additionally, Applicant’s specification does not include any discussion of how the claimed invention provides a technical improvement realized by these claims over the prior art or any explanation of a technical problem having an unconventional technical solution that is expressed in these claims. That is, like Affinity Labs of Tex. v. DirecTV, LLC, the specification fails to provide sufficient details regarding the manner in which the claimed invention accomplishes any technical improvement or solution. Thus, for these additional reasons, the abstract idea identified above in independent Claims 1, 10, and 19 (and their respective dependent claims) is not integrated into a practical application under the 2019 PEG.
Accordingly, independent Claims 1, 10, and 19 (and their respective dependent claims) are each directed to an abstract idea under the 2019 PEG.
Step 2B
None of Claims 1-20 includes additional elements that are sufficient to amount to significantly more than the abstract idea, for at least the following reasons.
These claims require the additional elements of: “at least one processor”, “motion sensor”, “camera”, “mobile phone”, “global navigation satellite system (GNSS) receiver”, and “a non-transitory, computer-readable storage medium”. The above-identified additional elements are generically claimed computer components which enable the above-identified abstract idea(s) to be conducted by performing the basic functions of automating mental tasks. The courts have recognized such computer functions as well-understood, routine, and conventional functions when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. See Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); and OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.
Those in the relevant field of art would recognize the above-identified additional elements as being well-understood, routine, and conventional means for data-gathering and computing, as demonstrated by the Applicant’s specification (e.g., paragraphs [0050]-[0060]), which discloses that the processor(s) comprise generic computer components that are configured to perform the generic computer functions (e.g., determining) that are well-understood, routine, and conventional activities previously known to the pertinent industry, and by the Applicant’s Background in the specification.
Accordingly, in light of Applicant’s specification, the claimed term “processor” is reasonably construed as a generic computing device. Like SAP Am., Inc. v. InvestPic, LLC (Fed. Cir. 2018), it is clear, from the claims themselves and the specification, that these limitations require no improved computer resources, just already available computers, with their already available basic functions, to use as tools in executing the claimed process.
Furthermore, Applicant’s specification does not describe any special programming or algorithms required for “the at least one processor”. This lack of disclosure is acceptable under 35 U.S.C. § 112(a) since this hardware performs non-specialized functions known by those of ordinary skill in the computer arts. By omitting any specialized programming or algorithms, Applicant's specification essentially admits that this hardware is conventional and performs well-understood, routine, and conventional activities in the computer industry or arts. In other words, Applicant’s specification demonstrates the well-understood, routine, conventional nature of the above-identified additional elements because it describes these additional elements in a manner that indicates that the additional elements are sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 U.S.C. § 112(a) (see Berkheimer memo from April 19, 2018, (III)(A)(1) on page 3). Adding hardware that performs “‘well understood, routine, conventional activit[ies]’ previously known to the industry” will not make claims patent-eligible (TLI Communications).
The recitation of the above-identified additional limitations in Claims 1-20 amounts to mere instructions to implement the abstract idea on a computer. Simply using a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); and TLI Communications LLC v. AV Auto, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Moreover, implementing an abstract idea on a generic computer does not add significantly more, similar to how the recitation of the computer in the claim in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer.
A claim that purports to improve computer capabilities or to improve an existing technology may provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); and Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). However, a technical explanation as to how to implement the invention should be present in the specification for any assertion that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. Here, Applicant’s specification does not include any discussion of how the claimed invention provides a technical improvement realized by these claims over the prior art or any explanation of a technical problem having an unconventional technical solution that is expressed in these claims. Instead, as in Affinity Labs of Tex. v. DirecTV, LLC, 838 F.3d 1253, 1263-64, 120 USPQ2d 1201, 1207-08 (Fed. Cir. 2016), the specification fails to provide sufficient details regarding the manner in which the claimed invention accomplishes any technical improvement or solution.
For at least the above reasons, the method, system, and medium of Claims 1-20 are directed to applying an abstract idea as identified above on a general purpose computer without (i) improving the performance of the computer itself, or (ii) providing a technical solution to a problem in a technical field. None of Claims 1-20 provides meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that these claims amount to significantly more than the abstract idea itself.
Taking the additional elements individually and in combination, the additional elements do not provide significantly more. Specifically, when viewed individually, the above-identified additional elements in independent Claims 1, 10, and 19 (and their dependent claims) do not add significantly more because they are simply an attempt to limit the abstract idea to a particular technological environment. That is, neither the general computer elements nor any other additional element adds meaningful limitations to the abstract idea because these additional elements represent insignificant extra-solution activity. When viewed as a combination, these above-identified additional elements simply instruct the practitioner to implement the claimed functions with well-understood, routine, and conventional activity specified at a high level of generality in a particular technological environment. As such, there is no inventive concept sufficient to transform the claimed subject matter into a patent-eligible application. When viewed as a whole, the above-identified additional elements do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Thus, Claims 1-20 merely apply an abstract idea to a computer and do not (i) improve the performance of the computer itself (as in Bascom and Enfish), or (ii) provide a technical solution to a problem in a technical field (as in DDR).
Therefore, none of Claims 1-20 amounts to significantly more than the abstract idea itself. Accordingly, Claims 1-20 are not patent eligible and are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-6, 10-12, 14-15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kawamura (U.S. Publication No. 2015/0002648) in view of Raghuram et al. (U.S. Publication No. 2016/0058372).
Regarding Claim 1, Kawamura discloses a method comprising:
obtaining, with at least one processor of a device, face tracking data associated with a user (The measuring apparatus 20 is herein used while standing against a portion of the treadmill 10 near an operating panel 11. The measuring apparatus 20 sequentially acquires images including the face of the person 30 who is regarded as a subject, detects the position of the face of the person 30 in the acquired images, and measures a continuous motional state of the person 30 based on the detected position of the face of the person 30 in the images; [0018]);
determining, with the at least one processor, a step cadence of the user based on the face tracking data (By monitoring the motions of the face in this manner, a pitch [bpm] (number of steps per minute) indicating the number of steps walked in unit time can be measured; [0032]);
determining, with the at least one processor, a speed of the user based on the step cadence and a stride length of the user (When a length of stride is input in the mobile phone in advance, data of stride, speed, and distance can be measured; [0032]);
Although Kawamura discloses a measuring unit 213, which has the function to measure the continuous motional state of the person 30 ([0026]), Kawamura fails to specifically disclose obtaining, with the at least one processor, device motion data from at least one motion sensor of the device; determining, with the at least one processor, a grade of a surface on which the user is walking or running based on at least one of the device motion data or the face tracking data; and determining, with the at least one processor, an energy expenditure of the user based on the estimated speed, the estimated grade and a caloric expenditure model.
In a similar technical field, Raghuram teaches a method for calculating a type of terrain using a fitness tracking device (Abstract), comprising:
obtaining, with the at least one processor, device motion data from at least one motion sensor of the device (the fitness tracking device 100 may also include the motion sensing module 220. The motion sensing module 220 may include one or more motion sensors, such as an accelerometer or a gyroscope; [0058]);
determining, with the at least one processor, a grade of a surface on which the user is walking or running based on at least one of the device motion data or the face tracking data (motion data obtained by the motion sensing module 220 may be used for a variety of purposes, including work rate modeling, automatic activity classification, pedometry, posture detection, cycling terrain identification, etc.; [0058]; data from one or more motion sensors (e.g., an altimeter) may be used to estimate a current slope of the surface; [0176]); and
determining, with the at least one processor, an energy expenditure of the user based on the estimated speed, the estimated grade and a caloric expenditure model (estimate a rate of energy expenditure of the user by applying a calorimetry model including a coefficient or a parameter associated with the type of the terrain; [0028]; “Calorimetry-Automatic Terrain Detection”; [0188-0198]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the terrain teachings of Raghuram into the invention of Kawamura in order to enable the work rate expenditure model to include an additional efficiency parameter related to terrain, as some terrain types (or types of surfaces) are easier to run on than others and may be more efficient (Raghuram [0189]).
Regarding Claim 2, Kawamura discloses capturing, with a camera of the device, video data of the user's face; and generating, with the at least one processor, the face tracking data from the video data (the control unit 21 sequentially acquires the images including the face of the person 30, and detects positions of the face of the person 30 in the acquired images. Meanwhile, the control unit 21 measures the continuous motional state of the person 30 based on the detected positions of the face of the person 30 in the images; [0044]).
Regarding Claim 3, Kawamura discloses wherein the device is a mobile phone and the camera is a front-facing camera of the mobile phone (A measuring apparatus 20 of the present embodiment may be realized, for example, by a mobile phone which is mounted at a position where the face of a person 30, who has been running or walking on the treadmill 10, can be monitored by an in-camera (illustrated as an image pickup unit 23 in FIG. 2 described below); [0018]).
Regarding Claim 5, Kawamura discloses wherein determining, with the at least one processor, the step cadence of the user based on the face tracking data further comprises: extracting features indicative of a step from a window of the face tracking data (The measuring unit 213 may have a function to measure, as the motional state, changes in position of the face of the person 30 in the images detected by the position detecting unit 212. The measuring unit 213 may also have a function to measure, as a motional state, a pitch number of the person 30 according to periodic changes in a vertical direction of the position of the face of the person 30 in the images. Further, the measuring unit 213 may have a function to measure a shift in position in a horizontal direction of the face of the person 30 in each image as a change in the motional state; [0027]); and
computing the step cadence based on the extracted features (Specifically, the in-camera of the mobile phone is used as the motional state measuring apparatus 20 to measure the pitch by detecting periodical movements of the face of the person 30. It may also be possible to additionally measure the vertical and horizontal movements of the face as well. By doing this, the continuous motional state of the person 30 can be measured properly even when the continuous movement may change due to fatigue, for example, of the person 30; [0044]).
Regarding Claim 6, Kawamura discloses wherein the features include at least one of the following features: 1) one period in vertical displacement and a half a period of horizontal displacement; 2) one horizontal velocity cusp and vertical velocity cusp within each step; 3) the horizontal and vertical velocity intersect near a time of a foot strike where the user's foot is touching the ground; or 4) the horizontal velocity, vertical velocity and vertical displacement amplitudes exceed specified thresholds (the number of pitches of the person 30 is measured as a motional state according to a periodic vertical change in the positions of the face of the person 30 in the images. The measuring unit 213 performs measurement other than step number measurement. The principle of the step number measurement is illustrated in FIG. 4A. As described above, the number of steps can be measured by monitoring the periodic motion of the face. As an index to indicate whether the running or walking form is proper, a vertical deviation (FIG. 4B) and a horizontal deviation (FIG. 4C) are provided. These deviations can also be measured by the image processing (both in unit [cm]); [0035]).
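For context, claimed feature (3) above (the horizontal and vertical velocity traces intersecting near a foot strike) can be illustrated with a simple sign-change crossing test. The sketch below is a hypothetical illustration only; the function name and the crossing criterion are the examiner's assumptions and are not Applicant's or Kawamura's disclosed method.

```python
# Hypothetical illustration of claimed feature (3): locating sample intervals
# in which the horizontal and vertical velocity traces intersect, taken as
# candidate foot-strike times. Assumed heuristic, not a disclosed algorithm.

def velocity_intersections(h_vel, v_vel):
    """Return indices i where the horizontal and vertical velocity traces
    cross between sample i and i+1 (the sign of their difference flips
    or the traces are exactly equal at sample i)."""
    diff = [h - v for h, v in zip(h_vel, v_vel)]
    return [
        i
        for i in range(len(diff) - 1)
        if diff[i] == 0 or (diff[i] < 0) != (diff[i + 1] < 0)
    ]
```

The returned indices mark the intervals where the two traces cross; in the claimed context, such a crossing near a detected foot strike would serve as one of the enumerated step features.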
Regarding Claim 10, Kawamura discloses a system comprising:
at least one processor; memory storing instructions that when executed by the at least one processor, cause the at least one processor to perform operations (a storage medium of one aspect of the present invention is a non-volatile recording medium storing a computer-readable program for causing a computer to execute: a procedure to sequentially acquire images, a procedure to detect positions of a particular body part of a person in the respective acquired images, and a procedure to measure a continuous motional state of the person based on the detected positions of the particular body part; [0007]) comprising:
obtaining face tracking data associated with a user (The measuring apparatus 20 is herein used while standing against a portion of the treadmill 10 near an operating panel 11. The measuring apparatus 20 sequentially acquires images including the face of the person 30 who is regarded as a subject, detects the position of the face of the person 30 in the acquired images, and measures a continuous motional state of the person 30 based on the detected position of the face of the person 30 in the images; [0018]);
determining a step cadence of the user based on the face tracking data (By monitoring the motions of the face in this manner, a pitch [bpm] (number of steps per minute) indicating the number of steps walked in unit time can be measured; [0032]);
determining a speed of the user based on the step cadence and a stride length of the user (When a length of stride is input in the mobile phone in advance, data of stride, speed, and distance can be measured; [0032]);
Although Kawamura discloses a measuring unit 213, which has the function to measure the continuous motional state of the person 30 ([0026]), Kawamura fails to specifically disclose obtaining device motion data from at least one motion sensor of the device; determining a grade of a surface on which the user is walking or running based on at least one of the device motion data or the face tracking data; and determining an energy expenditure of the user based on the estimated speed, the estimated grade and a caloric expenditure model.
In a similar technical field, Raghuram teaches a method for calculating a type of terrain using a fitness tracking device (Abstract), comprising:
obtaining device motion data from at least one motion sensor of the device (the fitness tracking device 100 may also include the motion sensing module 220. The motion sensing module 220 may include one or more motion sensors, such as an accelerometer or a gyroscope; [0058]);
determining a grade of a surface on which the user is walking or running based on at least one of the device motion data or the face tracking data (motion data obtained by the motion sensing module 220 may be used for a variety of purposes, including work rate modeling, automatic activity classification, pedometry, posture detection, cycling terrain identification, etc.; [0058]; data from one or more motion sensors (e.g., an altimeter) may be used to estimate a current slope of the surface; [0176]); and
determining an energy expenditure of the user based on the estimated speed, the estimated grade and a caloric expenditure model (estimate a rate of energy expenditure of the user by applying a calorimetry model including a coefficient or a parameter associated with the type of the terrain; [0028]; “Calorimetry-Automatic Terrain Detection”; [0188-0198]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated the terrain teachings of Raghuram into the invention of Kawamura in order to enable the work rate expenditure model to include an additional efficiency parameter related to terrain, as some terrain types (or types of surfaces) are easier to run on than others and may be more efficient (Raghuram [0189]).
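The combined computation recited in the claim limitations addressed above (speed derived from step cadence and stride length, and an energy-expenditure rate that depends on speed and grade) can be sketched as follows. This is only an illustrative model: the function names, the linear form, and the coefficients are assumptions for clarity, not the calorimetry model of either Kawamura or Raghuram.

```python
def estimate_speed(cadence_spm: float, stride_m: float) -> float:
    """Speed (m/s) from step cadence (steps per minute) and stride length (m)."""
    return cadence_spm * stride_m / 60.0

def estimate_energy_rate(speed_ms: float, grade: float, mass_kg: float) -> float:
    """Rough walking energy-expenditure rate (kcal/min) from speed and grade.

    Uses hypothetical linear coefficients (loosely in the spirit of
    ACSM-style walking equations), NOT the model of the cited references.
    `grade` is the surface slope as a fraction (e.g., 0.05 for 5%).
    """
    speed_m_per_min = speed_ms * 60.0
    # Resting term + horizontal-speed term + grade term, in ml O2/kg/min.
    vo2 = 3.5 + 0.1 * speed_m_per_min + 1.8 * speed_m_per_min * grade
    # Convert to kcal/min assuming ~5 kcal per liter of oxygen consumed.
    return vo2 * mass_kg / 1000.0 * 5.0
```

For example, a cadence of 120 steps/min with a 1.0 m stride gives 2.0 m/s, and a steeper grade raises the estimated rate for the same speed, which is the efficiency effect the combination rationale relies on.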
Regarding Claim 11, Kawamura discloses wherein the operations further comprise: capturing, with a camera of the device, video data of the user's face; and generating, with the at least one processor, the face tracking data from the video data (the control unit 21 sequentially acquires the images including the face of the person 30, and detects positions of the face of the person 30 in the acquired images. Meanwhile, the control unit 21 measures the continuous motional state of the person 30 based on the detected positions of the face of the person 30 in the images; [0044]).
Regarding Claim 12, Kawamura discloses wherein the device is a mobile phone and the camera is a front-facing camera of the mobile phone (A measuring apparatus 20 of the present embodiment may be realized, for example, by a mobile phone which is mounted at a position where the face of a person 30, who has been running or walking on the treadmill 10, can be monitored by an in-camera (illustrated as an image pickup unit 23 in FIG. 2 described below); [0018]).
Regarding Claim 14, Kawamura discloses wherein determining the step cadence of the user based on the face tracking data further comprises: extracting features indicative of a step from a window of the face tracking data (The measuring unit 213 may have a function to measure, as the motional state, changes in position of the face of the person 30 in the images detected by the position detecting unit 212. The measuring unit 213 may also have a function to measure, as a motional state, a pitch number of the person 30 according to periodic changes in a vertical direction of the position of the face of the person 30 in the images. Further, the measuring unit 213 may have a function to measure a shift in position in a horizontal direction of the face of the person 30 in each image as a change in the motional state; [0027]); and
computing the step cadence based on the extracted features (Specifically, the in-camera of the mobile phone is used as the motional state measuring apparatus 20 to measure the pitch by detecting periodical movements of the face of the person 30. It may also be possible to additionally measure the vertical and horizontal movements of the face as well. By doing this, the continuous motional state of the person 30 can be measured properly even when the continuous movement may change due to fatigue, for example, of the person 30; [0044]).
Regarding Claim 15, Kawamura discloses wherein the features include at least one of the following features: 1) one period in vertical displacement and a half a period of horizontal displacement; 2) one horizontal velocity cusp and vertical velocity cusp within each step; 3) the horizontal and vertical velocity intersect near a time of a foot strike where the user's foot is touching the ground; or 4) the horizontal velocity, vertical velocity and vertical displacement amplitudes exceed specified thresholds (the number of pitches of the person 30 is measured as a motional state according to a periodic vertical change in the positions of the face of the person 30 in the images. The measuring unit 213 performs measurement other than step number measurement. The principle of the step number measurement is illustrated in FIG. 4A. As described above, the number of steps can be measured by monitoring the periodic motion of the face. As an index to indicate whether the running or walking form is proper, a vertical deviation (FIG. 4B) and a horizontal deviation (FIG. 4C) are provided. These deviations can also be measured by the image processing (both in unit [cm]); [0035]).
Regarding Claim 19, Kawamura discloses a non-transitory, computer-readable storage medium having stored thereon instructions that, when executed by at least one processor, cause the at least one processor to perform operations (a storage medium of one aspect of the present invention is a non-volatile recording medium storing a computer-readable program for causing a computer to execute: a procedure to sequentially acquire images, a procedure to detect positions of a particular body part of a person in the respective acquired images, and a procedure to measure a continuous motional state of the person based on the detected positions of the particular body part; [0007]) comprising:
obtaining face tracking data associated with a user (The measuring apparatus 20 is herein used while standing against a portion of the treadmill 10 near an operating panel 11. The measuring apparatus 20 sequentially acquires images including the face of the person 30 who is regarded as a subject, detects the position of the face of the person 30 in the acquired images, and measures a continuous motional state of the person 30 based on the detected position of the face of the person 30 in the images; [0018]);
determining a step cadence of the user based on the face tracking data (By monitoring the motions of the face in this manner, a pitch [bpm] (number of steps per minute) indicating the number of steps walked in unit time can be measured; [0032]);
determining a speed of the user based on the step cadence and a stride length of the user (When a length of stride is input in the mobile phone in advance, data of stride, speed, and distance can be measured; [0032]);
Although Kawamura discloses a measuring unit 213, which has the function to measure the continuous motional state of the person 30 ([0026]), Kawamura fails to specifically disclose obtaining device motion data from at least one motion sensor of the device; determining a grade of a surface on which the user is walking or running based on at least one of the device motion data or the face tracking data; and determining an energy expenditure of the user based on the estimated speed, the estimated grade and a caloric expenditure model.
In a similar technical field, Raghuram teaches a method for calculating a type of terrain using a fitness tracking device (Abstract), comprising:
obtaining device motion data from at least one motion sensor of the device (the fitness tracking device 100 may also include the motion sensing module 220. The motion sensing module 220 may include one or more motion sensors, such as an accelerometer or a gyroscope; [0058]);
determining a grade of a surface on which the user is walking or running based on at least one of the device motion data or the face tracking data (motion data obtained by the motion sensing module 220 may be used for a variety of purposes, including work rate modeling, automatic activity classification, pedometry, posture detection, cycling terrain identification, etc.; [0058]; data from one or more motion sensors (e.g., an altimeter) may be used to estimate a current slope of the surface; [0176]); and
determining an energy expenditure of the user based on the estimated speed, the estimated grade and a caloric expenditure model (estimate a rate of energy expenditure of the user by applying a calorimetry model including a coefficient or a parameter associated with the type of the terrain; [0028]; “Calorimetry-Automatic Terrain Detection”; [0188-0198]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated the terrain teachings of Raghuram into the invention of Kawamura in order to enable the work rate expenditure model to include an additional efficiency parameter related to terrain, as some terrain types (or types of surfaces) are easier to run on than others and may be more efficient (Raghuram [0189]).
Regarding Claim 20, Kawamura discloses wherein determining the step cadence of the user based on the face tracking data further comprises: extracting features indicative of a step from a window of the face tracking data (The measuring unit 213 may have a function to measure, as the motional state, changes in position of the face of the person 30 in the images detected by the position detecting unit 212. The measuring unit 213 may also have a function to measure, as a motional state, a pitch number of the person 30 according to periodic changes in a vertical direction of the position of the face of the person 30 in the images. Further, the measuring unit 213 may have a function to measure a shift in position in a horizontal direction of the face of the person 30 in each image as a change in the motional state; [0027]); and
computing the step cadence based on the extracted features (Specifically, the in-camera of the mobile phone is used as the motional state measuring apparatus 20 to measure the pitch by detecting periodical movements of the face of the person 30. It may also be possible to additionally measure the vertical and horizontal movements of the face as well. By doing this, the continuous motional state of the person 30 can be measured properly even when the continuous movement may change due to fatigue, for example, of the person 30; [0044]).
Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kawamura and Raghuram as applied to claims 1 and 10 above, and further in view of McCready et al. (U.S. Publication No. 2014/0274567).
Regarding Claim 4, Kawamura and Raghuram fail to disclose correcting, with the at least one processor, the face tracking data to remove vertical face motion due to the user nodding their head.
In a similar technical field, McCready teaches an adaptable exercise system and method (Abstract), comprising: correcting, with the at least one processor, the face tracking data to remove vertical face motion due to the user nodding their head (Therefore, exercise tracker 16 is responsible for: a) determining when to trust the periodicity output of the video tracker, and when to apply a correction factor relative to the cadence as a function of what type of exercise is being performed; and b) how to interpret the motion (e.g. horizontal for bikes, vertical for treadmills), and c) if both camera 15 and accelerometer 18 are used in combination, when to defer to the vibrational energy measurement to provide smoother and more responsive exercise output to the host application 12; [0040]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated the correction teachings of McCready into those of Kawamura and Raghuram in order to correctly interpret the motion to provide smoother and more responsive exercise output (McCready [0040]).
Regarding Claim 13, Kawamura and Raghuram fail to disclose correcting, with the at least one processor, the face tracking data to remove face motion caused by the user nodding their head.
In a similar technical field, McCready teaches an adaptable exercise system and method (Abstract), comprising: correcting, with the at least one processor, the face tracking data to remove face motion caused by the user nodding their head (Therefore, exercise tracker 16 is responsible for: a) determining when to trust the periodicity output of the video tracker, and when to apply a correction factor relative to the cadence as a function of what type of exercise is being performed; and b) how to interpret the motion (e.g. horizontal for bikes, vertical for treadmills), and c) if both camera 15 and accelerometer 18 are used in combination, when to defer to the vibrational energy measurement to provide smoother and more responsive exercise output to the host application 12; [0040]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated the correction teachings of McCready into those of Kawamura and Raghuram in order to correctly interpret the motion to provide smoother and more responsive exercise output (McCready [0040]).
Claims 8-9 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Kawamura and Raghuram as applied to claims 1 and 10 above, and further in view of Bhatt et al. (U.S. Publication No. 2017/0356770).
Regarding Claim 8, Kawamura and Raghuram fail to disclose computing, with the at least one processor, an uncalibrated stride length of the user based at least in part on a height of the user; computing, with the at least one processor, an uncalibrated distance by multiplying the step cadence and the uncalibrated stride length; computing, with the at least one processor, a calibration factor by dividing a truth distance by the uncalibrated distance, and then multiplying the uncalibrated stride length by the calibration factor to get a calibrated stride length; and computing, with the at least one processor, the speed of the user by multiplying the step cadence by the calibrated stride length.
In a similar technical field, Bhatt teaches a method and apparatus for determining, recommending, and applying a calibration parameter for activity measurement (Abstract), comprising: computing, with the at least one processor, an uncalibrated stride length of the user based at least in part on a height of the user (an estimated stride length which is based on e.g., the user's height as determined from the user profile information of step 202; [0041]); computing, with the at least one processor, an uncalibrated distance by multiplying the step cadence and the uncalibrated stride length (distance may be derived by multiplying the collected step data by an estimated stride length (derived based on height); [0057]); computing, with the at least one processor, a calibration factor by dividing a truth distance by the uncalibrated distance (calibration may comprise a percentage variance or difference such as between a known distance and a measured distance; [0032]), and then multiplying the uncalibrated stride length by the calibration factor to get a calibrated stride length (the recommended calibration parameter comprises a recommended stride length which is determined by comparing a distance derived using an estimated stride length to a distance derived from geo-position data; [0046]); and computing, with the at least one processor, the speed of the user by multiplying the step cadence by the calibrated stride length (activity data is collected via an activity monitoring device 102. The activity data may comprise data relating to walking, running, rowing, cycling, or any activity involving distance including e.g., a number of steps taken, a distance travelled, an average speed; [0042]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated the calibration teachings of Bhatt into those of Kawamura and Raghuram in order to enable the user to easily adjust a calibration parameter to cause collected data to more accurately represent the measured activity. Furthermore, devices that are able to provide a mechanism for calibration adjustment as disclosed herein can operate more efficiently to provide accurate distance measurements which meet a user's standard which assists the user in viewing health-parameter data and establishing and maintaining healthy lifestyle patterns (Bhatt [0080]).
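The sequence of limitations in Claim 8 (height-based stride estimate, uncalibrated distance, calibration factor from a truth distance, calibrated stride) reduces to simple arithmetic, sketched below. The height-to-stride factor of 0.414 is a common rule of thumb used here as a placeholder assumption, not a value disclosed by Bhatt.

```python
def calibrate_stride(height_m: float, steps: int, truth_distance_m: float) -> float:
    """Calibrated stride length (m) per the Claim 8 limitations.

    Assumes a hypothetical height-based stride estimate (0.414 * height);
    the truth distance would come from, e.g., a GNSS receiver per Claims 9/18.
    """
    uncalibrated_stride = 0.414 * height_m          # stride from user height
    uncalibrated_distance = steps * uncalibrated_stride
    calibration_factor = truth_distance_m / uncalibrated_distance
    return uncalibrated_stride * calibration_factor
```

Note that algebraically the calibrated stride collapses to `truth_distance_m / steps`, so the initial height-based estimate cancels out of the final value; speed then follows by multiplying the step cadence by the calibrated stride.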
Regarding Claim 9, Raghuram discloses wherein the truth distance is obtained or derived from a global navigation satellite system (GNSS) receiver (the motion sensing module 220 may include an altimeter, or other types of location sensors, such as a GPS sensor; [0059]). Bhatt also discloses wherein the truth distance is obtained or derived from a global navigation satellite system (GNSS) receiver (calibration factor is determined based on a difference between a measured distance and a distance determined via position data (e.g., GPS data, GIS/map data, satellite data, etc.) from the positioning/location system 110. Hence, the system 110 itself may comprise a GPS system, GIS system, one or more satellite, one or more databases of geographic information or other information useful in providing an accurate calculation of a distance and/or position or location; [0036]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated the GPS teachings of Bhatt and Raghuram into the invention of Kawamura in order to improve the accuracy of the speed estimates with GPS location data (Raghuram [0295]) and in order to provide an accurate calculation of a distance and/or position or location (Bhatt [0036]).
Regarding Claim 17, Kawamura and Raghuram fail to disclose wherein the operations further comprise: computing an uncalibrated stride length of the user based at least in part on a height of the user; computing an uncalibrated distance by multiplying the step cadence and the uncalibrated stride length; computing a calibration factor by dividing a truth distance by the uncalibrated distance, and then multiplying the uncalibrated stride length by the calibration factor to get a calibrated stride length; and computing the speed of the user by multiplying the step cadence by the calibrated stride length.
In a similar technical field, Bhatt teaches a method and apparatus for determining, recommending, and applying a calibration parameter for activity measurement (Abstract), wherein the operations further comprise:
computing an uncalibrated stride length of the user based at least in part on a height of the user (an estimated stride length which is based on e.g., the user's height as determined from the user profile information of step 202; [0041]);
computing an uncalibrated distance by multiplying the step cadence and the uncalibrated stride length (distance may be derived by multiplying the collected step data by an estimated stride length (derived based on height); [0057]);
computing a calibration factor by dividing a truth distance by the uncalibrated distance (calibration may comprise a percentage variance or difference such as between a known distance and a measured distance; [0032]), and then multiplying the uncalibrated stride length by the calibration factor to get a calibrated stride length (the recommended calibration parameter comprises a recommended stride length which is determined by comparing a distance derived using an estimated stride length to a distance derived from geo-position data; [0046]); and
computing the speed of the user by multiplying the step cadence by the calibrated stride length (activity data is collected via an activity monitoring device 102. The activity data may comprise data relating to walking, running, rowing, cycling, or any activity involving distance including e.g., a number of steps taken, a distance travelled, an average speed; [0042]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated the calibration teachings of Bhatt into those of Kawamura and Raghuram in order to enable the user to easily adjust a calibration parameter to cause collected data to more accurately represent the measured activity. Furthermore, devices that are able to provide a mechanism for calibration adjustment as disclosed herein can operate more efficiently to provide accurate distance measurements which meet a user's standard which assists the user in viewing health-parameter data and establishing and maintaining healthy lifestyle patterns (Bhatt [0080]).
Regarding Claim 18, Raghuram discloses wherein the truth distance is obtained or derived from a global navigation satellite system (GNSS) receiver (the motion sensing module 220 may include an altimeter, or other types of location sensors, such as a GPS sensor; [0059]). Bhatt also discloses wherein the truth distance is obtained or derived from a global navigation satellite system (GNSS) receiver (calibration factor is determined based on a difference between a measured distance and a distance determined via position data (e.g., GPS data, GIS/map data, satellite data, etc.) from the positioning/location system 110. Hence, the system 110 itself may comprise a GPS system, GIS system, one or more satellite, one or more databases of geographic information or other information useful in providing an accurate calculation of a distance and/or position or location; [0036]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated the GPS teachings of Bhatt and Raghuram into the invention of Kawamura in order to improve the accuracy of the speed estimates with GPS location data (Raghuram [0295]) and in order to provide an accurate calculation of a distance and/or position or location (Bhatt [0036]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANEL J JHIN whose telephone number is (571) 272-2695. The examiner can normally be reached on Monday-Friday 9:00AM-5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alexander Valvis, can be reached at 571-272-4233. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHANEL J JHIN/Examiner, Art Unit 3791