DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: an image receiving module, an image recognition module, and a result feedback module in claim 9. They appear to be software. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 
101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1 recites “evaluating a tremor level” and “to obtain a tremor level” from an image to be recognized that comprises a spiral graph used for recognizing whether a drawing person has a tremor state or not. Under Prong One of the two-prong inquiry of Step 2A of the eligibility analysis, these limitations are considered a mental process because they can be performed in the mind by mental observation or evaluation, or using pen and paper. Under Prong Two of Step 2A, this judicial exception is not integrated into a practical application because there are no limitations that indicate: improvements to the functioning of a computer or to the technology/technical field; effecting a particular treatment or prophylaxis for a disease/condition; applying the judicial exception with a particular machine (the user terminal is recited with such generality that it is not considered a particular machine); effecting a transformation or reduction of a particular article to a different state/thing; or applying the judicial exception in a meaningful way beyond generally linking it to a particular technological environment. There are no limitations referring to any practical output or application in the claims. While the claim recites “sending the tremor level to the user terminal,” a generic output step to a user terminal, claimed at a high level of generality and common in the art, does not constitute any type of practical application. 
Under Step 2B of the eligibility analysis, the claim(s) does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the other limitations present do not impose meaningful limits on the abstract idea. While the claim recites “receiving an image to be recognized which is uploaded by a user terminal” and “taking the image to be recognized as an input value of a pre-trained convolutional neural network regression device,” the claim does not put any limits on how the image must be received, and the receiving can therefore merely be observed by the user, which is another type of mental process. The user terminal is also recited at a high level of generality, as stated, and does not add any specific structure. The convolutional neural network regression device is also considered a type of mathematical algorithm, and thus a calculation, and does not provide structure or significantly more than the abstract idea. Lastly, the recitation of output to the user terminal also does not constitute significantly more because the output and the user terminal are generically recited. Claims 2-4 also do not add significantly more because they do not integrate the abstract idea into a practical application or add significantly more. Claims 5-7 recite preprocessing steps for the image, which likewise do not integrate the abstract idea into a practical application or add significantly more. The same is true for Claim 8, which merely adds detail to the well-known, routine, and conventional output. Claim 9 recites an image receiving module, an image recognition module, and a result feedback module to perform the respective steps above. However, these components are claimed at such a high level of generality (i.e., both merely refer to software with a function, per the 112(f) interpretation) that they do not constitute any specific machine or structure to perform the functions. 
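For illustration only of the level of generality at issue (this sketch is not part of the record, and all names in it are hypothetical, not drawn from applicant's specification), the three recited modules amount to ordinary generic software of roughly the following kind, reciting no structure beyond a function each:

```python
# Hypothetical sketch: each claimed "module" is generic software performing
# a single recited function, with no specific structure beyond that function.

def image_receiving_module(uploaded_bytes):
    """Receive an image to be recognized from a user terminal."""
    return uploaded_bytes  # any generic receive step suffices

def image_recognition_module(image, regressor):
    """Feed the image to a pre-trained regressor to obtain a tremor level."""
    return regressor(image)

def result_feedback_module(tremor_level, send):
    """Send the tremor level back to the user terminal."""
    send(tremor_level)
    return tremor_level

# Generic usage: a stand-in regressor and transport exercise the claimed steps.
sent = []
level = result_feedback_module(
    image_recognition_module(image_receiving_module(b"spiral"), lambda img: 0.5),
    sent.append,
)
```

The point of the sketch is that nothing in it ties the functions to any particular machine or structure; any generic software could fill each role.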
The same is true for Claims 10-11, which merely clarify the computing and software components that implement the abstract idea and do not constitute any specific machine or structure to perform the abstract idea. Claim 12 positively recites the recognition terminal that communicates with the at least one user terminal by means of a network connection. However, these are also recited at a high level of generality, merely refer to generic computing components known in the art, and thus do not provide a practical application or significantly more than the abstract idea.

Claim Objections

Claim 12 is objected to because of the following informalities: the claim should recite “the recognition terminal is further configured to receive --the-- image to be recognized” and “the recognition terminal is further configured to receive… the image to be recognized comprises --the-- spiral graph.” Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3-7 and 12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. In regard to Claim 3, the claim recites dependency to Claim 1. However, Claim 1 does not recite “the convolutional neural network model;” that is recited in Claim 2. 
Therefore, the claim is indefinite. Clarification is required. In regard to Claim 4, the claim recites dependency to Claim 1. However, Claim 1 does not recite the step of “the training a set of images to be learned by means of a convolutional neural network model to obtain a convolutional neural network regression device;” that is recited in Claim 2. Therefore, the claim is indefinite. Clarification is required. In regard to Claim 12, it is unclear if the recitation of “a pre-trained convolutional neural network regression device” is meant to be the same as or different from “a convolutional neural network regression device” previously recited. While it is assumed both refer to the same device, clarification and proper antecedent basis usage is required. Claims 5-7 are rejected by dependency on Claim 4.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4, and 9-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Grunsten et al (US Pub No. 20210113143). In regard to Claim 1, Grunsten et al disclose a method for recognizing a tremor symptom, comprising: receiving an image – user trace – to be recognized (0030) which is uploaded by a user terminal 105 (mobile device) when the user traces the pattern (0030), best seen in Figures 1-2, wherein the image to be recognized comprises a spiral graph (Archimedean spiral – 0030) used for recognizing whether a drawing person has a tremor state or not (relating to a movement disorder) and evaluating a tremor level, i.e. 
severity (abstract, 0002), best seen in Figure 2 (0030); taking the image to be recognized as an input value of a pre-trained convolutional neural network 400, best seen in Figure 4 (0036-0040), regression device – “The first neural network and the second neural network can also be implemented in other manners such as logistic regression” (0039) to obtain a tremor level – “The machine learning system 145 is configured to output a single index indicating both the likelihood a movement disorder is present in the reconstructed user trace (e.g., a value between 0 and 1, where 0 means the user has no movement disorder and 1 means the user has a movement disorder) and the severity of the movement disorder if a movement disorder is present in the reconstructed user trace (e.g., a value of 1.5, where 1 indicates that the user has a movement disorder and 0.5 indicates the severity of the movement disorder) … The value of the single index may further have other levels of measurements or increase such as double or triple to further indicate the individual's movement disorder severity. In some embodiments, two or more separate indexes can be used” (0035); and sending the tremor level to the user terminal – “The value of the single index is transmitted to the mobile device 105 and displayed to the user showing him whether he has a movement disorder and the severity of his movement disorder if he does” (0035). 2. 
The method according to claim 1, wherein before the receiving an image to be recognized which is uploaded by a user terminal, the method further comprises: training a set of images to be learned by means of a convolutional neural network model to obtain a convolutional neural network regression device, wherein all images to be learned in the set of images to be learned each comprise a spiral graph drawn by a patient – “The machine learning system 145 is trained by using images of traces produced by individuals over a pattern … Both set of images are supplied to the machine learning system 145 from the first neural network or the convolution layer in the first neural network. The machine learning system is trained until [it] can distinguish the traces produced by individuals with and without a movement disorder to a very high degree of accuracy (e.g., above 90%) … the images of traces produced by individuals over a pattern are used to configure a neural network in the machine learning system to produce a category label. The category label is configured to output an index value” (0040-0042, 0048-0050). 4. The method according to claim 1, wherein before the training a set of images to be learned by means of a convolutional neural network model to obtain a convolutional neural network regression device, the method further comprises: preprocessing each of the images to be learned in the set of images to be learned – “The resolution of the images used to train the machine learning system can be adjusted such as lowered before or after the transformation process. The resolution of the images used to train the machine learning system and the resolution of the reconstructed user trace images are preferably the same or similar” (0048). 
In regard to Claim 9, Grunsten et al disclose an apparatus for recognizing a tremor symptom, comprising: an image receiving module 120 – pattern receiving and conversion software application – configured to receive an image – user trace – to be recognized which is uploaded by a user terminal 105 (mobile device) when the user traces the pattern (0030), best seen in Figures 1-2, wherein the image to be recognized comprises a spiral graph (Archimedean spiral – 0030) used for recognizing whether a drawing person has a tremor state or not (relating to a movement disorder) and evaluating a tremor level, i.e. severity (abstract, 0002), best seen in Figure 2 (0030); an image recognition module 135 – pattern processing software application (0029, 0034-0035) – configured to take the image to be recognized as an input value of a pre-trained convolutional neural network 400, best seen in Figure 4 (0036-0040), regression device – “The first neural network and the second neural network can also be implemented in other manners such as logistic regression” (0039) to obtain a tremor level – “The machine learning system 145 is configured to output a single index indicating both the likelihood a movement disorder is present in the reconstructed user trace (e.g., a value between 0 and 1, where 0 means the user has no movement disorder and 1 means the user has a movement disorder) and the severity of the movement disorder if a movement disorder is present in the reconstructed user trace (e.g., a value of 1.5, where 1 indicates that the user has a movement disorder and 0.5 indicates the severity of the movement disorder) … The value of the single index may further have other levels of measurements or increase such as double or triple to further indicate the individual's movement disorder severity. 
In some embodiments, two or more separate indexes can be used” (0035); and a result feedback module in mobile device 105 configured to send the tremor level to the user terminal – “The value of the single index is transmitted to the mobile device 105 and displayed to the user showing him whether he has a movement disorder and the severity of his movement disorder if he does” (0035). 10. A recognition terminal 110 – computer system – inherently comprising a memory and a processor, as is well-known, routine, and conventional in the art, wherein the memory stores a computer program, and when the processor executes the computer program, the steps of the method according to claim 1 are implemented, best seen in Figure 1 (0029, 0034-0035). 11. A computer-readable storage medium in computer system 110, on which a computer program is stored, and when the computer program is executed by a processor inherently within computer system 110, the steps of the method according to claim 1 are implemented, best seen in Figure 1 (0029, 0034-0035). 12. 
Grunsten et al disclose a system for recognizing a tremor symptom, comprising: a recognition terminal 110 – computer system – and at least one user terminal 105 – mobile device – that execute the method for recognizing a tremor symptom according to claim 1, best seen in Figure 1 (0029, 0034-0035), wherein the recognition terminal and the at least one user terminal communicate by means of a network connection 115, best seen in Figure 1 (0029, 0034); the recognition terminal is configured to train an image set to be learned by means of a convolutional neural network model (0036-0042) to obtain a convolutional neural network 400 regression device – “The first neural network and the second neural network can also be implemented in other manners such as logistic regression” (0039), wherein all images to be learned in the image set to be learned each comprise a spiral graph drawn by a patient – “the machine learning system is trained by using images of traces produced by individuals over a pattern without movement disorder and images of traces produced by individuals over the pattern with movement disorder caused by Parkinson's disease. The pattern being traced is preferably a spiral or Archimedean spiral” (0041); the user terminal is configured to acquire the image to be recognized which comprises a spiral graph – Archimedean spiral, best seen in Figure 2 – and send the image to be recognized to the recognition terminal, best seen in Figure 1 (0029-0030); the recognition terminal is further configured to receive an image to be recognized which is uploaded by the user terminal when the user traces the pattern (0030), wherein the image to be recognized comprises a spiral graph, best seen in Figure 2, used for recognizing whether a drawing person has a tremor state or not (relating to a movement disorder) and evaluating a tremor level, i.e. 
severity (abstract, 0002), is configured to take the image to be recognized as an input value of a pre-trained convolutional neural network 400 (0036-0040) regression device – “The first neural network and the second neural network can also be implemented in other manners such as logistic regression” (0039) to obtain a tremor level – “The machine learning system 145 is configured to output a single index indicating both the likelihood a movement disorder is present in the reconstructed user trace (e.g., a value between 0 and 1, where 0 means the user has no movement disorder and 1 means the user has a movement disorder) and the severity of the movement disorder if a movement disorder is present in the reconstructed user trace (e.g., a value of 1.5, where 1 indicates that the user has a movement disorder and 0.5 indicates the severity of the movement disorder) … The value of the single index may further have other levels of measurements or increase such as double or triple to further indicate the individual's movement disorder severity. In some embodiments, two or more separate indexes can be used” (0035), and is configured to send the tremor level to the user terminal – “The value of the single index is transmitted to the mobile device 105” (0035); and the user terminal is further configured to receive and display the tremor level – “and displayed to the user showing him whether he has a movement disorder and the severity of his movement disorder if he does” (0035).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Grunsten et al as applied to Claim 1, further in view of Peltonen et al (US Pub No. 20230015028) and Huang et al (CN 114913572). 
Grunsten et al disclose the invention above but do not expressly disclose that the convolutional neural network model is based on a ResNet-18 backbone network, the ResNet-18 backbone network is composed of 18 parameterized layers, and the ResNet-18 backbone network comprises a convolutional layer and a full connection layer, wherein an output layer of the full connection layer performs regression analysis to evaluate accuracy. It is noted that Grunsten et al already teach that the convolutional neural network model has a convolutional layer (0036) and a full connection layer (0037). Peltonen et al teach that it is well-known in the art to use ResNet-18 as a backbone network, wherein the ResNet-18 backbone network is composed of 18 parameterized layers and comprises a convolutional layer and a full connection layer, as a well-known convolutional neural network model that can be trained as desired – “One example of the process used to produce a CNN is to take a pretrained ResNet model, which is a residual network containing shortcut connections, such as ResNet-18, and use the convolutional layers of the model as a backbone, and replace the final non-convolutional layers with layers that suit this problem domain. These include fully connected hidden layers, dropout layers and batch normalization layers” (0126). Huang et al teach that it is well-known in the art to provide an output layer of the full connection layer that performs regression analysis to evaluate accuracy in convolutional neural networks – “In one embodiment, as shown in FIG. 4, expectoration detection model comprises a full connection layer, the second characteristic pattern input to the full connection layer, all connection layer in each of the neuron layer of all the all of the all of the neuron layer, The full connection layer can integrate the third convolution layer or the local information with classification of the lower sampling layer. 
the output value of the full connection layer is transmitted to an output, it can adopt softmax logistic regression to classify, so as to classify the face of the target object in the target image.” Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to modify the convolutional neural network model of Grunsten et al such that the convolutional neural network model is based on a ResNet-18 backbone network, the ResNet-18 backbone network is composed of 18 parameterized layers, and the ResNet-18 backbone network comprises a convolutional layer and a full connection layer, as taught by Peltonen et al, as an effective convolutional neural network model for the machine learning of Grunsten et al, and wherein an output layer of the full connection layer performs regression analysis, as taught by Huang et al, to provide an effective architecture for the convolutional neural network model to evaluate accuracy, since Grunsten et al already disclose a convolutional neural network regression device. Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Grunsten et al as applied to Claim 4, further in view of Ma et al (US Pub No. 20190087942) and Wang et al (US Pub No. 20210174553). 
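As context for the head-replacement idea Peltonen et al and Huang et al are cited for above — keeping a pretrained convolutional backbone and attaching a fully connected output layer that performs regression to a single value — the following miniature sketch illustrates only that architectural idea (it is not part of the record; the stand-in "backbone," the feature vector, and all weights are hypothetical, and a real ResNet-18's 18 parameterized layers are not reproduced):

```python
# Minimal sketch of the cited architecture: a (stand-in) backbone maps an
# image to a feature vector, and a fully connected output layer performs
# regression to a single value (e.g., a tremor level) rather than
# classification. All numbers here are hypothetical.

def backbone(image):
    """Stand-in for a pretrained convolutional backbone (e.g., ResNet-18):
    reduces a 2-D image to a small pooled feature vector."""
    flat = [p for row in image for p in row]
    return [sum(flat) / len(flat), max(flat), min(flat)]  # toy pooled features

def fc_regression_head(features, weights, bias):
    """Fully connected output layer performing regression: w . x + b."""
    return sum(w * x for w, x in zip(weights, features)) + bias

image = [[0.0, 1.0], [1.0, 0.0]]
features = backbone(image)                               # [0.5, 1.0, 0.0]
level = fc_regression_head(features, weights=[1.0, 0.5, 0.0], bias=0.0)
```

The design choice at issue is simply that the final layer's output is a continuous regression value instead of a softmax class label.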
In regard to Claim 5, Grunsten et al disclose the invention above, including reducing the images to be learned to a preset resolution (0048) and converting each of the images to be learned into a black and white image – “the images to be transformed are transformed into a two-tone format or two-tone images (e.g., each is a black and white image) by a transformation system 140 before images are supplied to the machine learning system 145” (0042), but do not expressly disclose wherein the preprocessing each of the images to be learned in the set of images to be learned specifically comprises: cropping each of the images to be learned, wherein each of the images to be learned subjected to cropping only retains a spiral graphic portion; adjusting a size of each of the images to be learned subjected to cropping according to the preset resolution; normalizing each of the images to be learned subjected to size adjustment by means of histogram equalization or contrast stretching; converting each of the images to be learned subjected to normalization into a grayscale image; and augmenting each of the images to be learned subjected to grayscale, wherein the augmented manner comprises: any one or a combination of more of random rotating, symmetrical flipping, scaling, perspective, changing brightness of images, contrast, saturation and hue, and inverting colors of given images. Ma et al teach that it is well-known in the art to process images that will have a feature detected from the images (abstract) by cropping the images as desired so they can be matched (0114, 0151, 0156, 0168), as well as augmenting the images by random rotating of the image to provide the ideal or desired pattern of pixels in the image as desired (0300, 0308). Wang et al teach that it is well-known in the art to process images that will have a character detected from the image (abstract) by normalizing the image, such as by contrast stretching, to highlight the desired character/feature detection (0048). 
Wang et al also teach converting the image into grayscale or black and white to reduce storage space (0084-0086). Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to preprocess each of the images to be learned in the set of images to be learned of Grunsten et al by cropping each of the images to be learned such that each image subjected to cropping only retains a spiral graphic portion, as taught by Ma et al, as an obvious region on which to focus the cropping since the spiral graphic portion is the desired matching feature; then adjusting a size of each of the images to be learned subjected to cropping according to the preset resolution, as already taught by Grunsten et al above; normalizing each of the images to be learned subjected to size adjustment by means of histogram equalization or contrast stretching, as taught by Wang et al, to highlight the desired feature, i.e. the spiral graphic portion; converting each of the images to be learned subjected to normalization into a grayscale image, as taught by Wang et al, as an equally effective format to save storage space; and augmenting each of the images to be learned subjected to grayscale, wherein the augmented manner comprises random rotating, as taught by Ma et al, to provide the desired pattern of pixels in each image, i.e. so the spiral graphic portion is oriented the same way in each image. 6. 
Grunsten et al in combination with Ma et al and Wang et al disclose the method according to claim 4, wherein after the preprocessing each of the images to be learned in the set of images to be learned, the method further comprises: performing enhancement processing on each of the images to be learned and the image to be recognized, wherein it would be obvious to perform the same processing, such as resolution alignment, on the images to be learned as well as the image to be recognized, as already taught by Grunsten et al – “The resolution of the images used to train the machine learning system and the resolution of the reconstructed user trace images are preferably the same or similar (if not the same, one or both of them are adjusted to the same or similar resolution). Same or similar resolution may allow the pattern processing software application to more easily determine movement disorder and severity” (0048). 7. Grunsten et al in combination with Wang et al in the manner above disclose that the performing enhancement processing on each of the images to be learned and the image to be recognized specifically comprises: performing contrast enhancement on each of the images to be learned and the image to be recognized by means of histogram equalization, histogram specification, contrast stretching, or local contrast enhancement, as taught by Wang et al (0048). Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Grunsten et al as applied to Claim 1, further in view of Martodam et al (US Pub No. 20220361804) and Keskar (US Pub No. 20240180478). Grunsten et al disclose the invention above but do not expressly disclose generating a report of a tremor level according to the tremor level and the historical tremor level associated with an ID of the user terminal. Martodam et al teach that it is well-known in the art to generate reports of various kinds on GUI 250, including of a historical tremor level, i.e. 
trend 252, as useful data to convey to the user in an analogous tremor recognition device, best seen in Figure 6 (0105). Keskar teaches that it is well-known in the art to provide patient profiles with analogous tremor recognition data, each assigned a unique patient ID, to effectively identify specific patient data from among others (0044). It is noted that it is well-known in the art that said patient ID may be considered an ID of the user terminal. Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to modify Grunsten et al such that the invention includes generating a report of a tremor level according to the tremor level and the historical tremor level, as taught by Martodam et al, to provide a report to the user of the tremor level already taught by Grunsten et al as well as the useful historical tremor level, and to have said report associated with an ID of the user terminal, as taught by Keskar, to effectively identify the patient data from among other patient data.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Huong Q NGUYEN whose telephone number is (571)272-8340. The examiner can normally be reached 10 am - 6 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Robertson, can be reached at (571)272-5001. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/H.Q.N/
Examiner, Art Unit 3791

/JENNIFER ROBERTSON/
Supervisory Patent Examiner, Art Unit 3791