DETAILED ACTION
This action is responsive to the claim amendments and Applicant’s Remarks filed 7 November 2025. The Examiner acknowledges the amendments to claims 1-2, 8, 11, 18, and 20. Claims 1-4, 7-14, and 17-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
Examiner's Note: currently, no claim limitation is interpreted as invoking 35 U.S.C. § 112(f).
Claim Rejections - 35 USC § 112
Examiner’s Note Regarding Machine Learning: The Examiner’s note regarding sufficient written description support for the machine learning-based classifier model on p. 3 of the Non-Final Rejection dated 15 August 2025 is maintained.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4, 7-14, and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. Each claim has been analyzed to determine whether it is directed to a judicial exception.
Representative claim 1 (representing all independent claims) recites:
A wearable electronic device configured to be worn around a user finger, the wearable electronic device comprising:
a ring-shaped housing configured to receive the user finger;
a motion sensor including a geomagnetic sensor;
an audio sensor;
a display;
a memory storing instructions;
a touch sensor; and
at least one processor electrically connected to the motion sensor, the audio sensor, and the memory,
wherein the instructions, when executed by the at least one processor individually or collectively, cause the wearable electronic device to:
obtain motion sensing information via the motion sensor,
obtain an audio signal corresponding to the motion sensing information via the audio sensor,
identify a tooth-brushing hand motion type corresponding to the motion sensing information by using at least one pre-stored tooth-brushing hand motion information,
identify an audio signal pattern corresponding to the tooth-brushing hand motion type from the obtained audio signal by using at least one pre-stored tooth-brushing sound pattern information,
identify a start of tooth-brushing when a designated periodic motion pattern is identified based on the motion sensing information by using at least one pre-stored motion pattern information,
identify, based on the tooth-brushing hand motion type and the audio signal pattern, a tooth-brushing hand motion corresponding to the motion sensing information and the audio signal by using at least one pre-stored tooth-brushing hand motion information, and
correct, based on identifying the start of tooth-brushing, at least one axis of the motion sensor by:
obtaining, via the touch sensor, touch sensing information including a wearing state and a wearing position of the wearable electronic device on the user finger based on identifying the start of tooth-brushing,
determining an amount of rotation of the wearable electronic device worn around the user finger about an axis through the user finger relative to a predetermined reference point on the user finger, based on the touch sensing information and geomagnetic direction information sensed by the geomagnetic sensor, and
based on the amount of rotation, correcting the at least one axis of the motion sensors.
(Emphasis added: abstract idea, additional element)
Step 2A Prong 1
Representative claim 1 recites the following abstract ideas, which may be performed in the mind or by hand with the assistance of pen and paper:
“identify a tooth-brushing hand motion type corresponding to the motion sensing information by using at least one pre-stored tooth-brushing hand motion information” – may be performed by merely observing known or collected data and drawing mental conclusions therefrom based on a known or previously identified correlation [Applicant’s Specification p. 12]
“identify an audio signal pattern corresponding to the tooth-brushing hand motion type from the obtained audio signal by using at least one pre-stored tooth-brushing sound pattern information” – may be performed by merely observing known or collected data and drawing mental conclusions therefrom based on a known or previously identified correlation [Applicant’s Specification p. 15]
“identify a start of tooth-brushing when a designated periodic motion pattern is identified based on the motion sensing information by using at least one pre-stored motion pattern information” – may be performed by merely observing known or collected data and drawing mental conclusions therefrom [Applicant’s Specification p. 30]
“identify, based on the tooth-brushing hand motion type and the audio signal pattern, a tooth-brushing hand motion corresponding to the motion sensing information and the audio signal by using at least one pre-stored tooth-brushing hand motion information” – may be performed by merely observing known or collected data and drawing mental conclusions therefrom based on a known or previously identified correlation [Applicant’s Specification p. 16]
“correct, based on identifying the start of tooth-brushing, at least one axis of the motion sensor by: obtaining… touch sensing information including a wearing state and a wearing position of the wearable electronic device on the user finger based on identifying the start of tooth-brushing” – may be performed by merely observing known or collected data and drawing mental conclusions therefrom [Applicant’s Specification p. 31]
“correct, based on identifying the start of tooth-brushing, at least one axis of the motion sensor by:… determining an amount of rotation of the wearable electronic device worn around the user finger about an axis through the user finger relative to a predetermined reference point on the user finger, based on the touch sensing information and geomagnetic direction information sensed by the geomagnetic sensor” – may be performed by merely observing known or collected data and drawing mental conclusions therefrom [Applicant’s Specification p. 31]
“correct, based on identifying the start of tooth-brushing, at least one axis of the motion sensor by:… based on the amount of rotation, correcting the at least one axis of the motion sensors” – may be performed by merely observing known or collected data and drawing mental conclusions therefrom and/or applying known or derived mathematical processes on at least a limited amount of data [Applicant’s Specification p. 31]
If a claim, under its broadest reasonable interpretation (BRI), covers performance of the limitations in the mind but for the mere recitation of extra-solution activity (and otherwise generic computer elements), then the claim falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea under Step 2A Prong 1 of the Alice/Mayo framework as set forth in the 2019 PEG.
No limitations are recited that would render any of the identified evaluation steps too complex to be performed by hand with pen and paper.
Alternatively or additionally, these steps describe the concept of using implicit mathematical formula(s) [i.e., "correct, based on identifying the start of tooth-brushing, at least one axis of the motion sensor by: obtaining touch sensing information including a wearing state and a wearing position of the wearable electronic device on the user finger based on identifying the start of tooth-brushing", "determining an amount of rotation of the wearable electronic device worn around the user finger about an axis through the user finger relative to a predetermined reference point on the user finger, based on the touch sensing information and geomagnetic direction information sensed by the geomagnetic sensor", "based on the amount of rotation, correcting the at least one axis of the motion sensors"] to derive a conclusion based on input of data, which corresponds to concepts identified as abstract ideas by the courts [Diamond v. Diehr, 450 U.S. 175, 209 U.S.P.Q. 1 (1981); Parker v. Flook, 437 U.S. 584, 198 U.S.P.Q. 193 (1978); and In re Grams, 888 F.2d 835, 12 U.S.P.Q.2d 1824 (Fed. Cir. 1989)]. The concept of the recited limitations identified as mathematical concepts above is not meaningfully different than those mathematical concepts found by the courts to be abstract ideas.
The dependent claims merely include limitations that either further define the abstract idea [e.g., limitations relating to the data gathered or particular steps that are entirely embodied in the mental process] or amount to no more than generally linking the use of the abstract idea to a particular technological environment or field of use, because they are merely incidental or token additions to the claims that do not alter or affect how the process steps are performed.
Thus, these concepts are similar to concepts found by the courts to be abstract ideas: collecting, displaying, and manipulating data [Intellectual Ventures I LLC v. Capital One Financial Corp.]; collecting information, analyzing it, and displaying certain results of the collection and analysis [Electric Power Group, LLC v. Alstom S.A.]; and collection, storage, and recognition of data [Smart Systems Innovations, LLC v. Chicago Transit Authority].
Step 2A Prong 2
The judicial exception is not integrated into a practical application.
Representative claim 1 recites only additional elements of extra-solution activity [generic computer functions of a display, memory, and processor; data-gathering steps to obtain motion sensing information, audio signals, touch sensing information, and geomagnetic direction information] without further sufficient detail that would tie the abstract portions of the claim into a specific practical application (2019 PEG p. 55 – the instant claim, for example, does not tie into a particular machine, a sufficiently particular form of data or signal collection via the claimed extra-solution activity, or a sufficiently particular form of display or computing architecture/structure).
Dependent claims 2-3, 7, 12-13, and 17 merely add detail to the abstract portions of the claim but do not otherwise encompass any additional elements that tie the claims into a particular application/integration [the dependent claims recite steps which encompass mere computer instructions to carry out an otherwise wholly abstract idea].
Dependent claims 8-9 and 18-19 encounter substantially the same issues as the independent claims from which they depend in that they encompass further generic extra-solution activity [generic data gathering] and/or generic computer elements [storage, memory per se].
Accordingly, the claim(s) are not integrated into a practical application under Step 2A Prong 2.
Step 2B
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Independent claims 1, 11, and 20 as individual wholes fail to amount to significantly more than the judicial exception at Step 2B. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of extra-solution activity [i.e., generic computer functions, data gathering, etc.] and generic computer elements cannot amount to significantly more than an abstract idea [MPEP § 2106.05(f)] and are further considered to merely implement an abstract idea on a generic computer [MPEP § 2106.05(d)(II) establishes computer-based elements that are considered to be well-understood, routine, and conventional when recited at a high level of generality].
For the independent claim portions and the dependent claims which provide additional elements of extra-solution data gathering, MPEP § 2106.05(g) establishes that mere data gathering for determining a result does not amount to significantly more. The extra-solution activity of processor steps (acquiring/transmitting signals, etc.) and displaying, as presently recited, cannot provide an inventive concept which amounts to significantly more than the recited abstract idea.
For the independent claims as well as the dependent claims merely reciting generic computer elements and functions [display, memory, one or more processors, recited at a high level of generality, for performing generic functions therein], MPEP § 2106.05(d)(II) establishes computer-based elements which are considered to be well understood, routine, and conventional when recited at a high level of generality.
Accordingly, the generic computer elements and generic functions therein, as presently limited, cannot provide an inventive concept since they fall under a generic structure and/or function that does not add a meaningful additional feature to the judicial exception(s) of the claim(s).
Claims 1, 11, and 20 recite "A wearable electronic device configured to be worn around a user finger, the wearable electronic device comprising: a ring-shaped housing configured to receive the user finger; a motion sensor including a geomagnetic sensor; an audio sensor; a display;… a touch sensor", wherein claims 4 and 14 further recite that "the motion sensor includes an acceleration sensor and a gyro sensor". Such a wearable electronic device is considered well-understood, routine, and conventional, as known by at least:
Applicant's disclosure is not particular regarding the structure of the generically claimed wearable electronic device, and recites the wearable electronic device at a high level of generality [Wearable electronic devices have become increasingly commonplace as these electronic devices provide conveniences to the users. For example, wearable electronic devices may be implemented in various types, including accessories such as eyeglasses, watches, and rings, clothes, or body implants, and may collect and provide detailed information regarding peripheral environments or physical changes of individuals in real time (Applicant's Specification p. 1)], wherein the Examiner further notes that each of the claimed sensors comprising the wearable electronic device is also recited at a high level of generality [the motion sensor 212 may include an acceleration sensor and/or a gyro sensor and may further include a geomagnetic sensor. The acceleration sensor may sense the impact or acceleration experienced by the electronic device 201 or caused by the movement of the body of the user wearing the electronic device 201. The gyro sensor may sense a rotation angle or a rotation direction of the electronic device 201, which is experienced by the electronic device 201 or caused by the movement of the body of the user wearing the electronic device 201. The geomagnetic sensor may sense direction of geomagnetism (Applicant's Specification p. 12); The audio module 170 may convert a sound into an electrical signal and vice versa. According to an embodiment, the audio module 170 may obtain the sound via the input module 150, or output the sound via the sound output module 155 or an external electronic device (e.g., an electronic device 102 (e.g., a speaker or a headphone)) directly or wirelessly coupled with the electronic device 101 (Applicant's Specification p. 8); The touch sensor 216 according to an embodiment may detect touches.
For example, the touch sensor 216 may detect a touch caused by the contact of the body of the user. The touch sensor 216 according to an embodiment may be included in the display 260 or may be included in the electronic device 201 as a separate element (Applicant's Specification p. 13)]. This lack of disclosure is acceptable under 35 U.S.C. 112(a) since this hardware performs non-specialized functions known by those of ordinary skill in the wearable technology arts. Thus, Applicant's specification essentially admits that this hardware is conventional and performs well-understood, routine, and conventional activities in the field of wearable technology. In other words, Applicant's specification demonstrates the well-understood, routine, conventional nature of the above-identified additional element because it describes such an additional element in a manner that indicates that the additional element is sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 U.S.C. 112(a) [see Berkheimer memo dated April 19, 2018, p. 3, (III)(A)(1), not attached]. Adding hardware that performs "well understood, routine, conventional activit[ies] previously known to the industry" will not make claims patent-eligible [TLI Communications].
Laput (US-20210063434-A1, previously presented) [FIG. 1A illustrates an example wearable device 150 (e.g., a watch) that can include an integrated touch screen 152, one or more sensors 160 (e.g., accelerometer, microphone, light sensor, etc.), and processing circuitry programmed to detect individual health related events (e.g., handwashing events) according to examples of the disclosure… It is understood that although wearable device 150 includes a touch screen, the detection of individual health related events described herein can be applied to a device with or without a touch-sensitive or a non-touch-sensitive display. It is understood that wearable device 150 illustrated in FIG. 1A is one example of a wearable device, but the detection of individual health related events can be implemented in part or entirely in other wearable devices (e.g., ring, smart band, health band, finger-cuff, wrist-cuff, glove, etc.) (Laput ¶0015); Sensors circuitry 211 can be coupled to various sensors including, but not limited to,… one or more magnetometers, one or more accelerometers (e.g., corresponding to accelerometer(s) 106 in FIG. 1B), one or more gyroscopes, one or more inertial measurement units (IMUs) or one or more IMU sub-components (Laput ¶0025)]
Ryu (US-20190313367-A1, previously presented) [The at least one wearable device 120 may be embodied, for example, as a smart watch, smart glasses, smart earphones, smart shoes, a smart ring, a smart bracelet, etc. (Ryu ¶0110); For example, a touch screen may be disposed on the display unit 2710, a temperature sensor or a heart rate sensor may be disposed on the inner circumferential surface 2730, or an acceleration sensor, a position detection module, a motion sensor, an illumination sensor, or the like may be disposed on the body 2720. A touch sensor may be disposed on the outer circumferential surface 2740 (Ryu ¶0244); Referring to FIG. 69, the electronic device 110c or the at least one wearable device 120c may include at least one among a display unit 6910, a controller 6970, a memory 6920, a GPS chip 6925, a communication unit 6930, a video processor 6935, an audio processor 6940, a user input unit 6945, a microphone unit 6950, an imaging unit 6955, a speaker unit 6960, and a motion sensor 6965 (Ryu ¶0430); In this case, the motion sensor 6965 may sense the features of motions of the main bodies of the electronic device 110c and the at least one wearable device 120c, such as a direction and angle of rotation, an inclination, etc., by using at least one among various sensors such as a geomagnetic sensor, a gyrosensor, and an acceleration sensor (Ryu ¶0453)]
Zhao (US-20170251268-A1, previously presented) [In some embodiments, wearable devices 606 and 608 are watches, though the embodiments of this disclosure are optionally implemented with user devices other than watches or wearable devices (e.g., any device that is associated with a user profile such as a fitness band, smart ring, smart glasses, etc). Watches 606 and 608 optionally correspond to device 100 or device 300 described above with reference to FIGS. 1A-1B and 2-3 (Zhao ¶0136); Device 100 includes… audio circuitry 110, speaker 111, microphone 113… Device 100 optionally includes one or more intensity sensors 165 for detecting intensity of contacts on device 100 (e.g., a touch-sensitive surface such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300) (Zhao ¶0021); Device 100 optionally includes, in addition to accelerometer(s) 168, a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 100 (Zhao ¶0041)]
Martin (US-11106309-B1, effective filing date of 7 January 2021, previously presented) [In some implementations, the system may be integrated in electronic devices, such as… smart rings (Martin Col 5:18-20); In some implementations, a haptic actuator, LEDs, a microphone array, and/or speakers can be included, among other elements as part of the user interface 160 (Martin Col 9:44-46); The inertial measurement unit (IMU) can be used to detect the position of the wrist of the user in reference to the position of the user, and rotate the display in the top interface 230 to adjust to the detected position. In some implementations, the IMU can include accelerometers, gyroscopes, magnetometers, or any combination thereof. Data from the accelerometer and the geomagnetic field sensor (e.g., magnetometer) can be used to determine the device's physical position in the world's frame of reference (Martin Col 21:61-Col 22:2)]
Claims 9 and 19 recite “communication circuitry”. Such “communication circuitry” is considered well-understood, routine, and conventional, as known by at least:
Applicant's disclosure is not particular regarding the structure of the generically claimed "communication module", and recites that the communication module may comprise modules for performing known types of data communication [the communication module 219 may include a cellular communication module, an ultra-wide band (UWB) communication module, a Bluetooth communication module, and/or a wireless fidelity (WiFi) communication module, and in addition, may further include other modules which can communicate with the external electronic device 102 (Applicant's Specification p. 13)]. This lack of disclosure is acceptable under 35 U.S.C. 112(a) since this hardware performs non-specialized functions known by those of ordinary skill in the data communication arts. Thus, Applicant's specification essentially admits that this hardware is conventional and performs well-understood, routine, and conventional activities in the field of data communication. In other words, Applicant's specification demonstrates the well-understood, routine, conventional nature of the above-identified additional element because it describes such an additional element in a manner that indicates that the additional element is sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 U.S.C. 112(a) [see Berkheimer memo dated April 19, 2018, p. 3, (III)(A)(1), not attached]. Adding hardware that performs "well understood, routine, conventional activit[ies] previously known to the industry" will not make claims patent-eligible [TLI Communications].
Claim 10 recites “a machine learning-based classifier model”. Such a “machine learning-based classifier model” is considered well-understood, routine, and conventional, as known by at least:
Hu (“Intelligent Sensor Networks”, NPL previously presented) [In supervised learning, the learner is provided with labeled input data. This data contains a sequence of input/output pairs of the form xi, yi, where xi is a possible input and yi is the correctly labeled output associated with it. The aim of the learner in supervised learning is to learn the mapping from inputs to outputs. The learning program is expected to learn a function f that accounts for the input/output pairs seen so far, f (xi) = yi, for all i. This function f is called a classifier if the output is discrete and a regression function if the output is continuous. The job of the classifier/regression function is to correctly predict the outputs of inputs it has not seen before (Hu p. 5)]
Huang (“Kernel Based Algorithms for Mining Huge Data Sets”, NPL previously presented) [In supervised learning, the learner is provided with labeled input data. This data contains a sequence of input/output pairs of the form xi, yi, where xi is a possible input and yi is the correctly labeled output associated with it. The aim of the learner in supervised learning is to learn the mapping from inputs to outputs. The learning program is expected to learn a function f that accounts for the input/output pairs seen so far, f (xi) = yi, for all i. This function f is called a classifier if the output is discrete and a regression function if the output is continuous. The job of the classifier/regression function is to correctly predict the outputs of inputs it has not seen before (Huang p. 1)]
Mitchell (“The Discipline of Machine Learning”, NPL previously presented) [For example, we now have a variety of algorithms for supervised learning of classification and regression functions; that is, for learning some initially unknown function f : X [Calibri font/0xE0] Y given a set of labeled training examples {xi; yi} of inputs xi and outputs yi = f(xi) (Mitchell p. 3-4)]
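For illustration only, the supervised-learning concept described identically in Hu and Huang — learning a mapping f(xi) = yi from labeled input/output pairs and predicting the label of an unseen input — can be sketched as a minimal nearest-neighbor classifier. All function names and data values below are hypothetical and are not drawn from the cited references or from Applicant's disclosure:

```python
# Minimal 1-nearest-neighbor classifier: memorize labeled pairs (x_i, y_i)
# and predict the label of an unseen input from its closest stored input,
# illustrating the generic classifier concept quoted from the cited NPL.

def train(pairs):
    # "Training" for 1-NN is simply storing the labeled pairs.
    return list(pairs)

def predict(model, x):
    # Label x with the output y_i of the stored input x_i nearest to x.
    nearest = min(model, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Hypothetical labeled data: scalar feature -> class label.
labeled = [(0.1, "brushing"), (0.2, "brushing"), (0.9, "not-brushing")]
model = train(labeled)
print(predict(model, 0.15))  # closest stored inputs are 0.1/0.2 -> "brushing"
```

This sketch shows only the generic input/output-pair structure the cited references describe; any particular classifier in the application may differ.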
Examiner's Note Regarding Particular Treatment or Prophylaxis: Claims 8 and 10 recite subject matter regarding "displaying information related to tooth-brushing" [claim 8] and "compare tooth-brushing hand motion type information previously obtained by using a machine learning-based classifier model with feature information extracted from the motion sensing information to identify the tooth-brushing hand motion type" [claim 10]. The Examiner notes that this subject matter is not considered to be a particular treatment or prophylaxis, as none of the identified claims positively recites language that constitutes a particular treatment or prophylaxis as an additional element that would integrate the judicial exception into a practical application or allow the identified claims to amount to significantly more than the judicial exception [MPEP § 2106.04(d)(2)].
Accordingly, the claims as wholes fail to amount to significantly more than the judicial exception under Step 2B.
Subject Matter Not Taught By Prior Art
The following is a statement of reasons for the indication of subject matter considered to not be taught by prior art:
Regarding claim 1, the closest prior art of record is Huang, Serval, and Kienzle, wherein the Examiner notes that the combination of Huang in view of Serval and Kienzle as previously applied teaches each and every limitation [see p. 18-24 of the Non-Final Rejection dated 15 August 2025] except the amended subject matter wherein the processor causes the wearable electronic device to: "correct, based on identifying the start of tooth-brushing, at least one axis of the motion sensor by: obtaining, via the touch sensor, touch sensing information including a wearing state and a wearing position of the wearable electronic device on the user finger based on identifying the start of tooth-brushing, determining an amount of rotation of the wearable electronic device worn around the user finger about an axis through the user finger, relative to a predetermined reference point on the user finger, based on the touch sensing information and geomagnetic direction information sensed by the geomagnetic sensor, and based on the amount of rotation, correcting the at least one axis of the motion sensor". Huang, Serval, and Kienzle, alone and in combination, fail to expressly teach, disclose, or suggest the identified limitations, as there is no explicit disclosure in any of the cited references regarding "determining an amount of rotation of the wearable electronic device worn around the user finger about an axis through the user finger, relative to a predetermined reference point on the user finger, based on the touch sensing information and geomagnetic direction information sensed by the geomagnetic sensor, and based on the amount of rotation, correcting the at least one axis of the motion sensor".
It would not have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device of Huang in view of Serval and Kienzle as previously applied to employ the limitation(s) identified as not being taught, disclosed, or suggested by the previously cited references without the benefit of hindsight. Claims 11 and 20 are considered to similarly not be taught, disclosed, or suggested by any cited prior art reference mutatis mutandis due to similar claimed subject matter. Dependent claims 2-4, 7-10, 12-14, and 17-19 are considered to not be taught, disclosed, or suggested by any cited prior art reference due to their respective incorporations of claims 1 and 11.
Response to Arguments
Applicant’s arguments, see Applicant’s Remarks p. 9, filed 7 November 2025, with respect to the previously presented objections to claims 11 and 20 have been fully considered and are persuasive. The objections to claims 11 and 20 have been withdrawn.
Applicant’s arguments, see Applicant’s Remarks p. 9-12, with respect to the previously applied rejection(s) of claims 1, 11, 20, and those dependent therefrom under § 103 have been fully considered and are persuasive. The rejections under § 103 of claims 1, 11, 20, and those dependent therefrom have been withdrawn.
Applicant's arguments, see Applicant’s Remarks p. 13, with respect to the previously applied rejection(s) of claims 1, 11, 20, and those dependent therefrom under § 101 have been fully considered but they are not persuasive.
The Applicant asserts that the amended limitation "determining an amount of rotation of the wearable electronic device worn around the user finger about an axis through the user finger, relative to a predetermined reference point on the user finger, based on the touch sensing information and geomagnetic direction information sensed by the geomagnetic sensor, and based on the amount of rotation, correcting the at least one axis of the motion sensor" provides a technical solution, requires non-trivial computation and physical sensors, and is not performable by a human with pen and paper, specifically noting the correction of the sensing axis. However, the Examiner disagrees, noting that, as described in the Applicant's Specification, the correction of the sensing axis comprises applying a rotation matrix to correct data from the motion sensor [based on the touch sensing information sensed via the touch sensor, when the second wearable device 302 is worn on a finger, the processor 220 may identify how much it is rotated with respect to a preset wearing reference point based on the touch information and configure, based on the rotation state, a rotation matrix to rotate and correct each axis of the motion sensors 212 so as to correct axis information (Applicant's Specification p. 31)], which is considered to be performable in the mind or by hand for at least a limited amount of data. Furthermore, the Examiner notes that the alleged technical solution is recited within limitations that have been identified as abstract ideas implemented on a generic computer with additional elements that are considered to be well-understood, routine, and conventional. As such, under MPEP § 2106.05(a), "an improvement in the abstract idea itself (e.g. a recited fundamental economic concept) is not an improvement in technology".
Specifically, the "improvements" analysis in Step 2A determines whether the claim pertains to an improvement to the functioning of a computer or to another technology without reference to what is well-understood, routine, conventional activity [MPEP § 2106.04(d)(1)]. It is important to note that the judicial exception alone cannot provide the improvement; the improvement can be provided by one or more additional elements, or by the additional element(s) in combination with the recited judicial exception [MPEP § 2106.05(a); see the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92, 209 USPQ 1, 10 (1981)]. It is also important to note that in order for a method claim to improve computer functionality, the broadest reasonable interpretation of the claim must be limited to computer implementation. That is, a claim whose entire scope can be performed mentally cannot be said to improve computer technology. Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 120 USPQ2d 1473 (Fed. Cir. 2016) (a method of translating a logic circuit into a hardware component description of a logic circuit was found to be ineligible because the method did not employ a computer and a skilled artisan could perform all the steps mentally) [MPEP § 2106.05(a)(I)]. As such, the claims do not recite additional elements that integrate the abstract ideas into a practical application, and the claimed invention is thus not considered to improve other technology or another technical field.
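As a purely illustrative sketch of the kind of computation the Specification describes at p. 31 (configuring a rotation matrix from a determined amount of rotation and applying it to re-align the motion-sensor axes), the following assumes rotation about a single axis through the finger (taken here as the z-axis); the function name, angle value, and sample reading are hypothetical and are not drawn from Applicant's disclosure:

```python
import math

def correct_axes(reading, rotation_deg):
    # Rotate a 3-axis motion-sensor reading (x, y, z) about the finger axis
    # (assumed to be the z-axis) by the determined amount of rotation,
    # i.e. apply a standard rotation matrix to the x-y components.
    theta = math.radians(rotation_deg)
    x, y, z = reading
    xc = x * math.cos(theta) - y * math.sin(theta)
    yc = x * math.sin(theta) + y * math.cos(theta)
    return (xc, yc, z)

# A hypothetical 90-degree ring rotation maps an x-axis reading onto the y-axis.
corrected = correct_axes((1.0, 0.0, 0.0), 90.0)
print(corrected)  # approximately (0.0, 1.0, 0.0)
```

The arithmetic involved (a handful of multiplications and additions per reading) is the sort of limited computation the rejection characterizes as performable by hand.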
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Ashbrook (US-20120075173-A1) discloses a wearable electronic device comprising a ring-shaped housing configured to receive a user finger, wherein the device further comprises a touch sensor configured to determine an amount of rotation of the wearable electronic device worn around the user finger about an axis of the user finger [Ashbrook ¶0033, Figs. 3-5].
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEVERO ANTONIO P LOPEZ whose telephone number is (571)272-7378. The examiner can normally be reached M-F 9-6 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Marmor II can be reached at (571) 272-4730. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLES A MARMOR II/Supervisory Patent Examiner
Art Unit 3791
/S.P.L./Examiner, Art Unit 3791