Prosecution Insights
Last updated: April 17, 2026
Application No. 18/118,869

ARTIFICIAL INTELLIGENCE FOR DETECTING A MEDICAL CONDITION USING FACIAL IMAGES

Final Rejection (§103)
Filed: Mar 08, 2023
Examiner: YANG, WEI WEN
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: unknown
OA Round: 2 (Final)

Grant Probability: 82% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 8m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% — above average (539 granted / 657 resolved; +20.0% vs TC avg)
Interview Lift: +10.9% — moderate (over resolved cases with interview)
Typical Timeline: 2y 8m avg prosecution; 34 applications currently pending
Career History: 691 total applications across all art units

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 72.5% (+32.5% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 657 resolved cases.
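For reference, the headline numbers above are simple ratios over the examiner's 657 resolved cases. A short sketch of the arithmetic — the with/without-interview split below is a hypothetical stand-in chosen to be consistent with the ~+11% lift, since the dashboard does not expose the underlying counts:

```python
# Examiner career statistics recomputed from disposal counts.
granted, resolved = 539, 657
allow_rate = granted / resolved                    # 0.820... -> the 82% shown above

tc_avg = 0.62                                      # hypothetical TC 2600 baseline
print(f"career allow rate: {allow_rate:.1%} ({allow_rate - tc_avg:+.1%} vs TC avg)")

# Interview lift = allow rate with interview minus allow rate without,
# computed over resolved cases only. This split is hypothetical.
int_granted, int_resolved = 179, 200
rest_granted, rest_resolved = granted - int_granted, resolved - int_resolved
lift = int_granted / int_resolved - rest_granted / rest_resolved
print(f"interview lift: {lift:+.1%}")              # ~ +10.7%, cf. +10.9% above
```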

Office Action

§103
DETAILED ACTION

Response to Arguments

The amendments and arguments filed 10/27/2025 have been entered and made of record. The Applicant's amendments and arguments filed 10/27/2025 have been considered but are moot in view of the new ground(s) of rejection, because the Applicant has amended at least independent claim 1 to introduce a new limitation, "three dimensional facial image":

Re Claim 1, the newly added limitation "three dimensional facial image" has been considered; however, it is rejected by WANG (CN 111368672 A) as modified by JIANG (WO 2020113326 A1), and further in view of a new reference, CHENG (US 20200327308 A1, Date Filed: 2020-05-21), because:

CHENG discloses analysis of facial features {such as from medical imaging} from a three-dimensional facial image (see CHENG: e.g., -- generate a plurality of preliminary feature vectors based on an image associated with a human face, the plurality of preliminary feature vectors comprising a color-based feature vector; obtain at least one input feature vector based on the plurality of preliminary feature vectors; generate a deep feature vector based on the at least one input feature vector using the first sub-neural network--, in abstract, and, -- The analyzing may include analyzing a face in the image or video, which may include face detection, face representation, face identification, expression analysis, physical classification, or the like, or any combination thereof. Information may be obtained based on the analyzing result. [0082] The images or videos to be analyzed may be generated by image analyzing engine 120 from data obtained by imaging device 110, generated directly by imaging device 110, acquired from network 160, or input into image analyzing engine 120 from a computer readable storage media by a user. The images or videos may be two-dimensional or three-dimensional. [0083] Image analyzing engine 120 may perform a preprocessing for the data to be analyzed. The preprocessing may include image dividing, feature extracting, image registration, format converting, cropping, snapshotting, scaling, denoising, rotating, recoloring, subsampling, background elimination, normalization, or the like, or any combination thereof. During the preprocessing procedure, an image 135 focusing on a human face may be obtained from the image or video to be analyzed. Image 135 may be a color image, a grey image, or a binary image. Image 135 may be two-dimensional or three-dimensional.--, in [0081]-[0083]);

WANG (as modified by JIANG) and CHENG are combinable as they are in the same field of endeavor: automatic facial feature analysis from image data with a neural network. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify WANG (as modified by JIANG)'s method using CHENG's teachings by including analysis of facial features {such as extracting features from medical imaging} from a three-dimensional facial image in WANG (as modified by JIANG)'s facial feature and region detection and facial recognition, in order to generate a plurality of preliminary feature vectors based on an image associated with a human face, the plurality of preliminary feature vectors comprising a color-based feature vector; obtain at least one input feature vector based on the plurality of preliminary feature vectors; and generate a deep feature vector based on the at least one input feature vector using the first sub-neural network (see CHENG: e.g., in abstract, and [0081]-[0083]).

Therefore, claims 1-19 are still not patentably distinguishable over the prior art reference(s). Further discussions are addressed in the prior art rejection section below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over WANG (CN 111368672 A), in view of JIANG (WO 2020113326 A1), and further in view of a new reference, CHENG (US 20200327308 A1, Date Filed: 2020-05-21).

Re Claim 1, WANG discloses a method comprising using at least one hardware processor to (see WANG: e.g., --a construction method and device for genetic disease facial recognition model. the construction method of the genetic facial recognition model by using the first face image, using the first loss function training deep learning neural network, obtaining pre-training model, and in response to the network update operation in the pre-training model parameter. obtaining the updated pre-training model, then using small amount of second face image, using the pre-training model after second loss function training update, until the second loss function to the minimum value can be based on the genetic facial recognition model of human face image detecting genetic risk of collection, and the genetic can improve the detection accuracy of the model under the condition of little amount of patient data.--, in abstract, and, --A computer device comprising a memory and a processor, the memory storing a computer program, the steps when the processor executes the computer program to realize the above method. A computer readable storage medium on which a computer program is stored, the step for realizing the above method when the computer program is executed by a processor. said face recognition model construction method for genetic, genetic risk prediction method, a device, a computer device, and a storage medium, by using the first face image, using the first loss function training deep learning neural network.
obtaining the pre-training model, and in response to the network update operation in the pre-training model parameter, obtaining the updated pre-training model, then using small amount of second face image, using the pre-training model second updated loss function training, to the minimum value until the second loss function, can be based on the collected human face image detecting genetic risk of genetic disease facial recognition model, and the genetic condition of less amount of patient data and can improve the detection accuracy of the model.--, in pages 3-4 of English version of CN 111368672 A as provided in This Office Action):

train an artificial intelligence to predict at least one clinical parameter or medical condition from facial images by training a first convolutional neural network to detect facial regions and features in each facial image (see WANG: e.g., --obtaining the to-be-predicted image data; human face using a multi-task convolutional neural network identifies the human face area in the prediction image data, and detecting the face area of confidence; extracting the face confidence degree human face region meets the set value of, and for image quality detection to the face area; human face region genetic face recognition model constructed by the above method of the image quality meets the requirement for genetic and facial recognition, risk prediction result of outputting face region on each of the genetic type. in one embodiment, for image quality detection to the face area, the face area comprises: moire detection, if moire is not detected, it is determined that the image quality of the face region satisfies the requirement, otherwise the image quality of determining face area does not meet the requirement. in one embodiment, the output face area risk prediction result on each disease type, the method further comprises: generating a human face corresponding to heat and output the risk prediction results on each genetic type according to the face region. A genetic construction device of the face recognition model, comprising: training data obtaining module for obtaining a first training data set and second training data set, wherein the first training data set comprising a plurality of the first face image, the second training data set comprising a second face image of patients suffering from genetic disorders, provided with an inherited type tag in the second face image; a first training module, for using the first training data set, using the first loss function training deep learning neural network, obtaining pre-training model; model updating module for responding to the update operation in network parameter of the pre-training model, obtaining the pre-training model after updating; the second training module, used for using the second training data set, using the pre-training model after second loss function training update, until the second loss function to the minimum value to obtain genetic facial recognition model--, in pages 2-3, and page 6 of English version of CN 111368672 A as provided in This Office Action; also see: -- in the conventional deep neural network model, for extracting facial features is no pertinence of genetic disease, namely performing fuzzy extraction part, but the characteristic of the whole human face facial feature of multiple genetic disease embodied as part of organ characteristic variation. i.e., for a genetic disease, eye, nose, mouth and so on with an independent symptom phenotype does not necessarily exist at the same time, that is to say for a diagnostic and symptom-free if part of the same feature extraction operation, will reduce model for genetic characteristic of the identification ability. Therefore, in order to accurately extracting facial feature model to genetic disease…… the network input is X = (H, W, 1024), respectively, by 1 * 1 convolution kernel W θ performing convolution operation reduces the channel number, and calculating to obtain the θ convolutional (H, W, 1024) and after the convolution of (H, W, 1024) output. two output output then process the shape change operation to obtain the (HW, 1024), then both carrying out matrix multiplication (one of which need performing transposition operation) to obtain (HW, HW), and then the softmax operation to obtain the attention channel, these operations then finds the characteristic pattern of each pixel in pixel correlation with all the other position. Similarly to the 1 * 1 g of convolution operation, and then to shape transformation, and then softmax the output of matrix multiplication, to obtain (H, W, 512), namely a channel attention mechanism is applied to each feature map corresponding to position of all channels, intrinsically is weighted average of each position value output are all the other position,--, in pages 7-8 of English version of CN 111368672 A as provided in This Office Action);
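Read literally, the shapes quoted above — input (H, W, 1024), a softmax attention map of shape (HW, HW), and an output (H, W, 512) built from 1 * 1 convolutions θ, φ and g — describe a non-local self-attention block. The PyTorch sketch below is one plausible reading of that machine-translated passage, not WANG's actual code; the channel sizes come from the quoted shapes, and everything else is an assumption:

```python
import torch
import torch.nn as nn

class NonLocalAttention(nn.Module):
    """Self-attention over spatial positions, following the quoted shapes:
    input (H, W, 1024) -> attention (HW, HW) -> output (H, W, 512)."""
    def __init__(self, in_ch=1024, reduced_ch=512):
        super().__init__()
        self.theta = nn.Conv2d(in_ch, reduced_ch, kernel_size=1)  # 1x1 conv, reduces channels
        self.phi = nn.Conv2d(in_ch, reduced_ch, kernel_size=1)
        self.g = nn.Conv2d(in_ch, reduced_ch, kernel_size=1)

    def forward(self, x):                                # x: (B, 1024, H, W)
        b, _, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)     # (B, HW, C')
        k = self.phi(x).flatten(2)                       # (B, C', HW)
        attn = torch.softmax(q @ k, dim=-1)              # (B, HW, HW): pixel-to-pixel correlation
        v = self.g(x).flatten(2).transpose(1, 2)         # (B, HW, C')
        out = attn @ v                                   # weighted average over all other positions
        return out.transpose(1, 2).reshape(b, -1, h, w)  # (B, 512, H, W)

feat = torch.randn(1, 1024, 14, 14)
print(NonLocalAttention()(feat).shape)                   # torch.Size([1, 512, 14, 14])
```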
WANG, however, does not explicitly teach that the above facial images are three-dimensional facial images. CHENG discloses analysis of facial features {such as from medical imaging} from a three-dimensional facial image (see CHENG: e.g., -- generate a plurality of preliminary feature vectors based on an image associated with a human face, the plurality of preliminary feature vectors comprising a color-based feature vector; obtain at least one input feature vector based on the plurality of preliminary feature vectors; generate a deep feature vector based on the at least one input feature vector using the first sub-neural network--, in abstract, and, -- The analyzing may include analyzing a face in the image or video, which may include face detection, face representation, face identification, expression analysis, physical classification, or the like, or any combination thereof. Information may be obtained based on the analyzing result. [0082] The images or videos to be analyzed may be generated by image analyzing engine 120 from data obtained by imaging device 110, generated directly by imaging device 110, acquired from network 160, or input into image analyzing engine 120 from a computer readable storage media by a user. The images or videos may be two-dimensional or three-dimensional. [0083] Image analyzing engine 120 may perform a preprocessing for the data to be analyzed. The preprocessing may include image dividing, feature extracting, image registration, format converting, cropping, snapshotting, scaling, denoising, rotating, recoloring, subsampling, background elimination, normalization, or the like, or any combination thereof. During the preprocessing procedure, an image 135 focusing on a human face may be obtained from the image or video to be analyzed. Image 135 may be a color image, a grey image, or a binary image. Image 135 may be two-dimensional or three-dimensional.--, in [0081]-[0083]);

WANG and CHENG are combinable as they are in the same field of endeavor: automatic facial feature analysis from image data with a neural network. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify WANG's method using CHENG's teachings by including analysis of facial features {such as extracting features from medical imaging} from a three-dimensional facial image in WANG's facial feature and region detection and facial recognition, in order to generate a plurality of preliminary feature vectors based on an image associated with a human face, the plurality of preliminary feature vectors comprising a color-based feature vector; obtain at least one input feature vector based on the plurality of preliminary feature vectors; and generate a deep feature vector based on the at least one input feature vector using the first sub-neural network (see CHENG: e.g., in abstract, and [0081]-[0083]).

WANG as modified by CHENG, however, still does not explicitly teach detecting facial landmarks in each facial image. JIANG discloses {using a convolutional neural network (CNN) comprising a deep neural network} detecting facial landmarks in each three-dimensional facial image (see JIANG: e.g., -- a convolutional neural network (CNN) configured to classify pixels of an image to determine a plurality (N) of respective skin sign diagnoses for each of a plurality (N) of respective skin signs wherein the CNN comprises a deep neural network for image classification configured to generate the N respective skin sign diagnoses and wherein the CNN is trained using skin sign data for each of the N respective skin signs; and a processing unit coupled to the storage unit configured to receive the image and process the image using the CNN to generate the N respective skin sign diagnoses.--, in [0005]-[0006], and, -- a face and landmark detector to pre-process the image and the processing unit may be configured to generate a normalized image from the image using the face and landmark detector and use the normalized image when using the CNN.--, in [0010], and, -- a facial landmark detector 316, (such as is described in V. Kazemi, J. Sullivan, One millisecond face alignment with an ensemble of regression trees, in: IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1867-1874, incorporated herein by reference) on a source input image 318 to normalize the face (outputting image 202) based on detected landmarks. In this way, an input facial image x is always an upright frontal image of a face at a fixed scale.--, in [0044], and also see JIANG: e.g., -- It first connects the feature net (ResNet50 or MobileNet) by a fully connected layer with input size as the feature size after pooling (e.g.
1x1x2048 or 1x1x1280, respectively) and output size 50--, in [0035]-[0037] {herein, the feature vectors are three dimensional}; also see CHENG: e.g., -- generate a plurality of preliminary feature vectors based on an image associated with a human face, the plurality of preliminary feature vectors comprising a color-based feature vector; obtain at least one input feature vector based on the plurality of preliminary feature vectors; generate a deep feature vector based on the at least one input feature vector using the first sub-neural network--, in abstract, and, -- The analyzing may include analyzing a face in the image or video, which may include face detection, face representation, face identification, expression analysis, physical classification, or the like, or any combination thereof. Information may be obtained based on the analyzing result. [0082] The images or videos to be analyzed may be generated by image analyzing engine 120 from data obtained by imaging device 110, generated directly by imaging device 110, acquired from network 160, or input into image analyzing engine 120 from a computer readable storage media by a user. The images or videos may be two-dimensional or three-dimensional. [0083] Image analyzing engine 120 may perform a preprocessing for the data to be analyzed. The preprocessing may include image dividing, feature extracting, image registration, format converting, cropping, snapshotting, scaling, denoising, rotating, recoloring, subsampling, background elimination, normalization, or the like, or any combination thereof. During the preprocessing procedure, an image 135 focusing on a human face may be obtained from the image or video to be analyzed. Image 135 may be a color image, a grey image, or a binary image. Image 135 may be two-dimensional or three-dimensional.--, in [0081]-[0083]);

WANG (as modified by CHENG) and JIANG are combinable as they are in the same field of endeavor: automatic facial feature analysis from image data with a neural network. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify WANG (as modified by CHENG)'s method using JIANG's teachings by including {using a convolutional neural network (CNN) comprising a deep neural network} detecting facial landmarks in each facial image in WANG (as modified by CHENG)'s facial feature and region detection and facial recognition, in order to normalize the face based on detected landmarks (see JIANG: e.g., in [0005]-[0006], and [0044]-[0045]).

WANG as modified by CHENG and JIANG further disclose training a second convolutional neural network to predict one or more global features from each three-dimensional facial image (see WANG: e.g., -- the second training module, used for using the second training data set, using the pre-training model after second loss function training update, until the second loss function to the minimum value to obtain genetic facial recognition model--, in pages 2-3, and page 6 of English version of CN 111368672 A as provided in This Office Action; and, -- it can use global information extracting capability of SE Block to full utilization of the network global information in the shallow layer, extracting ability so as to improve the network global facial feature of the genetic disease.
Compared with the traditional facial recognition, the target of genetic identification not only needs the local feature of the specific individual, it needs more global information of a condition, therefore, can through the SE block of adding help network learning to more weight effective part and less weight of facial features.--, in page 7 of English version of CN 111368672 A as provided in This Office Action; also see: -- in the conventional deep neural network model, for extracting facial features is no pertinence of genetic disease, namely performing fuzzy extraction part, but the characteristic of the whole human face facial feature of multiple genetic disease embodied as part of organ characteristic variation. i.e., for a genetic disease, eye, nose, mouth and so on with an independent symptom phenotype does not necessarily exist at the same time, that is to say for a diagnostic and symptom-free if part of the same feature extraction operation, will reduce model for genetic characteristic of the identification ability. Therefore, in order to accurately extracting facial feature model to genetic disease…… the network input is X = (H, W, 1024), respectively, by 1 * 1 convolution kernel W θ performing convolution operation reduces the channel number, and calculating to obtain the θ convolutional (H, W, 1024) and after the convolution of (H, W, 1024) output. two output output then process the shape change operation to obtain the (HW, 1024), then both carrying out matrix multiplication (one of which need performing transposition operation) to obtain (HW, HW), and then the softmax operation to obtain the attention channel, these operations then finds the characteristic pattern of each pixel in pixel correlation with all the other position. Similarly to the 1 * 1 g of convolution operation, and then to shape transformation, and then softmax the output of matrix multiplication, to obtain (H, W, 512), namely a channel attention mechanism is applied to each feature map corresponding to position of all channels, intrinsically is weighted average of each position value output are all the other position,--, in pages 7-8 of English version of CN 111368672 A as provided in This Office Action; also see CHENG: e.g., -- generate a plurality of preliminary feature vectors based on an image associated with a human face, the plurality of preliminary feature vectors comprising a color-based feature vector; obtain at least one input feature vector based on the plurality of preliminary feature vectors; generate a deep feature vector based on the at least one input feature vector using the first sub-neural network--, in abstract, and, -- The analyzing may include analyzing a face in the image or video, which may include face detection, face representation, face identification, expression analysis, physical classification, or the like, or any combination thereof. Information may be obtained based on the analyzing result. [0082] The images or videos to be analyzed may be generated by image analyzing engine 120 from data obtained by imaging device 110, generated directly by imaging device 110, acquired from network 160, or input into image analyzing engine 120 from a computer readable storage media by a user. The images or videos may be two-dimensional or three-dimensional. [0083] Image analyzing engine 120 may perform a preprocessing for the data to be analyzed. 
The preprocessing may include image dividing, feature extracting, image registration, format converting, cropping, snapshotting, scaling, denoising, rotating, recoloring, subsampling, background elimination, normalization, or the like, or any combination thereof. During the preprocessing procedure, an image 135 focusing on a human face may be obtained from the image or video to be analyzed. Image 135 may be a color image, a grey image, or a binary image. Image 135 may be two-dimensional or three-dimensional.--, in [0081]-[0083]);
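The "SE Block" relied on above — quoted for its "global information extracting capability" — is, by name, the standard squeeze-and-excitation module: global pooling compresses each channel's feature map to a single value, and learned weights then re-scale the channels. A minimal sketch under that reading; the reduction ratio of 16 is the conventional default, not a value taken from WANG:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: compress each channel's feature map to one
    real value (global pooling), then reweight the channels ("excitation")."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, _, _ = x.shape
        squeeze = x.mean(dim=(2, 3))           # each 2-D channel becomes one value
        weights = self.fc(squeeze).view(b, c, 1, 1)
        return x * weights                     # heavier weight to informative channels

x = torch.randn(2, 512, 7, 7)
print(SEBlock(512)(x).shape)                   # torch.Size([2, 512, 7, 7])
```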
generating a facial-omics model to predict one or more local features from each three-dimensional facial image (see WANG: e.g., -- For the classification pleoptic is an advantageous characteristic of this information. by parity of reasoning, the whole model will all channel images on prescribed by the above weight distribution self-learning process, and finally will be more accurate for a genetic disease onset facial feature of giving high weight, the facial feature independent of the classification of given low weight, so as to realize extracting to optimization of precisely extracting the facial feature extraction from the global fuzzy, but also improves the accuracy of the model prediction…. Because in the conventional deep neural network model, for extracting facial features is no pertinence of genetic disease, namely performing fuzzy extraction part, but the characteristic of the whole human face facial feature of multiple genetic disease embodied as part of organ characteristic variation. i.e., for a genetic disease, eye, nose, mouth and so on with an independent symptom phenotype does not necessarily exist at the same time, that is to say for a diagnostic and symptom-free if part of the same feature extraction operation, will reduce model for genetic characteristic of the identification ability. Therefore, in order to accurately extracting facial feature model to genetic disease --, in pages 7-8 of English version of CN 111368672 A as provided in This Office Action; also see JIANG: e.g., -- In addition to model validation on image-based scores for selfies data, validation is also performed on a subset of the test subjects for which the dermatologists were able to score the skin condition signs in person. Expert dermatologists received visits from 68 subjects (around 12 experts per subject), and assessed them live, without regard to the subject image-based scores. Similarly to image-based analysis, the mean absolute error was calculated for each skin condition sign, for: 1) the model in system 200, by comparing the prediction from the model to the average experts' score for the sign for the particular test subject, and 2) for expert in person assessment, by comparing each expert's score vector to the average experts' score vector for this subject.--, in [0054], [0063]-[0064], and, -- the deep neural network model may be configured as a depthwise separable convolution neural network comprising convolutions in which individual standard convolutions are factorized into a depthwise convolution and a pointwise convolution. The depthwise convolution is limited to applying a single filter to each input channel and the pointwise convolution is limited to combining outputs of the depthwise convolution.--, in [0075]-[0077], [0080], and [0085]; also see CHENG: e.g., -- generate a plurality of preliminary feature vectors based on an image associated with a human face, the plurality of preliminary feature vectors comprising a color-based feature vector; obtain at least one input feature vector based on the plurality of preliminary feature vectors; generate a deep feature vector based on the at least one input feature vector using the first sub-neural network--, in abstract, and, -- The analyzing may include analyzing a face in the image or video, which may include face detection, face representation, face identification, expression analysis, physical classification, or the like, or any combination thereof. Information may be obtained based on the analyzing result. [0082] The images or videos to be analyzed may be generated by image analyzing engine 120 from data obtained by imaging device 110, generated directly by imaging device 110, acquired from network 160, or input into image analyzing engine 120 from a computer readable storage media by a user. The images or videos may be two-dimensional or three-dimensional. [0083] Image analyzing engine 120 may perform a preprocessing for the data to be analyzed. The preprocessing may include image dividing, feature extracting, image registration, format converting, cropping, snapshotting, scaling, denoising, rotating, recoloring, subsampling, background elimination, normalization, or the like, or any combination thereof. During the preprocessing procedure, an image 135 focusing on a human face may be obtained from the image or video to be analyzed. Image 135 may be a color image, a grey image, or a binary image. Image 135 may be two-dimensional or three-dimensional.--, in [0081]-[0083]);

and training a classification model to predict the at least one clinical parameter or medical condition based on the one or more global features and the one or more local features (see WANG: e.g., -- For the classification pleoptic is an advantageous characteristic of this information. by parity of reasoning, the whole model will all channel images on prescribed by the above weight distribution self-learning process, and finally will be more accurate for a genetic disease onset facial feature of giving high weight, the facial feature independent of the classification of given low weight, so as to realize extracting to optimization of precisely extracting the facial feature extraction from the global fuzzy, but also improves the accuracy of the model prediction…. Because in the conventional deep neural network model, for extracting facial features is no pertinence of genetic disease, namely performing fuzzy extraction part, but the characteristic of the whole human face facial feature of multiple genetic disease embodied as part of organ characteristic variation. i.e., for a genetic disease, eye, nose, mouth and so on with an independent symptom phenotype does not necessarily exist at the same time, that is to say for a diagnostic and symptom-free if part of the same feature extraction operation, will reduce model for genetic characteristic of the identification ability.
Therefore, in order to accurately extracting facial feature model to genetic disease --, in pages 7-8 of English version of CN 111368672 A as provided in This Office Action; and, -- it can use global information extracting capability of SE Block to full utilization of the network global information in the shallow layer, extracting ability so as to improve the network global facial feature of the genetic disease. Compared with the traditional facial recognition, the target of genetic identification not only needs the local feature of the specific individual, it needs more global information of a condition, therefore, can through the SE block of adding help network learning to more weight effective part and less weight of facial features.--, in page 7 of English version of CN 111368672 A as provided in This Office Action; also see: -- in the conventional deep neural network model, for extracting facial features is no pertinence of genetic disease, namely performing fuzzy extraction part, but the characteristic of the whole human face facial feature of multiple genetic disease embodied as part of organ characteristic variation. i.e., for a genetic disease, eye, nose, mouth and so on with an independent symptom phenotype does not necessarily exist at the same time, that is to say for a diagnostic and symptom-free if part of the same feature extraction operation, will reduce model for genetic characteristic of the identification ability. Therefore, in order to accurately extracting facial feature model to genetic disease…… the network input is X = (H, W, 1024), respectively, by 1 * 1 convolution kernel W θ performing convolution operation reduces the channel number, and calculating to obtain the θ convolutional (H, W, 1024) and after the convolution of (H, W, 1024) output. two output output then process the shape change operation to obtain the (HW, 1024), then both carrying out matrix multiplication (one of which need performing transposition operation) to obtain (HW, HW), and then the softmax operation to obtain the attention channel, these operations then finds the characteristic pattern of each pixel in pixel correlation with all the other position,--, in pages 7-8 of English version of CN 111368672 A as provided in This Office Action);

and operating the trained artificial intelligence by, for each of a plurality of three-dimensional facial images (see WANG: e.g., -- For the classification pleoptic is an advantageous characteristic of this information. by parity of reasoning, the whole model will all channel images on prescribed by the above weight distribution self-learning process, and finally will be more accurate for a genetic disease onset facial feature of giving high weight, the facial feature independent of the classification of given low weight, so as to realize extracting to optimization of precisely extracting the facial feature extraction from the global fuzzy, but also improves the accuracy of the model prediction….
Because in the conventional deep neural network model, for extracting facial features is no pertinence of genetic disease, namely performing fuzzy extraction part, but the characteristic of the whole human face facial feature of multiple genetic disease embodied as part of organ characteristic variation. i.e., for a genetic disease, eye, nose, mouth and so on with an independent symptom phenotype does not necessarily exist at the same time, that is to say for a diagnostic and symptom-free if part of the same feature extraction operation, will reduce model for genetic characteristic of the identification ability. Therefore, in order to accurately extracting facial feature model to genetic disease --, in pages 7-8 of English version of CN 111368672 A as provided in This Office Action; and, -- it can use global information extracting capability of SE Block to full utilization of the network global information in the shallow layer, extracting ability so as to improve the network global facial feature of the genetic disease. Compared with the traditional facial recognition, the target of genetic identification not only needs the local feature of the specific individual, it needs more global information of a condition, therefore, can through the SE block of adding help network learning to more weight effective part and less weight of facial features.--, in page 7 of English version of CN 111368672 A as provided in This Office Action; also see: -- in the conventional deep neural network model, for extracting facial features is no pertinence of genetic disease, namely performing fuzzy extraction part, but the characteristic of the whole human face facial feature of multiple genetic disease embodied as part of organ characteristic variation. i.e., for a genetic disease, eye, nose, mouth and so on with an independent symptom phenotype does not necessarily exist at the same time, that is to say for a diagnostic and symptom-free if part of the same feature extraction operation, will reduce model for genetic characteristic of the identification ability. Therefore, in order to accurately extracting facial feature model to genetic disease…… the network input is X = (H, W, 1024), respectively, by 1 * 1 convolution kernel W θ performing convolution operation reduces the channel number, and calculating to obtain the θ convolutional (H, W, 1024) and after the convolution of (H, W, 1024) output. two output output then process the shape change operation to obtain the (HW, 1024), then both carrying out matrix multiplication (one of which need performing transposition operation) to obtain (HW, HW), and then the softmax operation to obtain the attention channel, these operations then finds the characteristic pattern of each pixel in pixel correlation with all the other position. 
Similarly to the 1 * 1 g of convolution operation, and then to shape transformation, and then softmax the output of matrix multiplication, to obtain (H, W, 512), namely a channel attention mechanism is applied to each feature map corresponding to position of all channels, intrinsically is weighted average of each position value output are all the other position,--, in pages 7-8 of English version of CN 111368672 A as provided in This Office Action; also see CHENG: e.g., -- generate a plurality of preliminary feature vectors based on an image associated with a human face, the plurality of preliminary feature vectors comprising a color-based feature vector; obtain at least one input feature vector based on the plurality of preliminary feature vectors; generate a deep feature vector based on the at least one input feature vector using the first sub-neural network--, in abstract, and, -- The analyzing may include analyzing a face in the image or video, which may include face detection, face representation, face identification, expression analysis, physical classification, or the like, or any combination thereof. Information may be obtained based on the analyzing result. [0082] The images or videos to be analyzed may be generated by image analyzing engine 120 from data obtained by imaging device 110, generated directly by imaging device 110, acquired from network 160, or input into image analyzing engine 120 from a computer readable storage media by a user. The images or videos may be two-dimensional or three-dimensional. [0083] Image analyzing engine 120 may perform a preprocessing for the data to be analyzed. The preprocessing may include image dividing, feature extracting, image registration, format converting, cropping, snapshotting, scaling, denoising, rotating, recoloring, subsampling, background elimination, normalization, or the like, or any combination thereof. During the preprocessing procedure, an image 135 focusing on a human face may be obtained from the image or video to be analyzed. Image 135 may be a color image, a grey image, or a binary image. Image 135 may be two-dimensional or three-dimensional.--, in [0081]-[0083]);

receiving the three-dimensional facial image, applying the first convolutional neural network to identify the plurality of facial landmarks in the three-dimensional facial image, aligning the three-dimensional facial image to a template based on the identified plurality of facial landmarks, applying the second convolutional neural network to the aligned three-dimensional facial image to predict the one or more global features (see JIANG: e.g., -- a convolutional neural network (CNN) configured to classify pixels of an image to determine a plurality (N) of respective skin sign diagnoses for each of a plurality (N) of respective skin signs wherein the CNN comprises a deep neural network for image classification configured to generate the N respective skin sign diagnoses and wherein the CNN is trained using skin sign data for each of the N respective skin signs; and a processing unit coupled to the storage unit configured to receive the image and process the image using the CNN to generate the N respective skin sign diagnoses.--, in [0005]-[0006], and, -- a face and landmark detector to pre-process the image and the processing unit may be configured to generate a normalized image from the image using the face and landmark detector and use the normalized image when using the CNN.--, in [0010], and, -- a facial landmark detector 316, (such as is described in V.
Kazemi, J. Sullivan, One millisecond face alignment with an ensemble of regression trees, in: IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1867-1874, incorporated herein by reference) on a source input image 318 to normalize the face (outputting image 202) based on detected landmarks. In this way, an input facial image x is always an upright frontal image of a face at a fixed scale.--, in [0044]; and, --o link local to global (e.g. specific conditions in a region of the face while processing the entire face) and to have an exhaustive mapping of the face targeting all the key areas - by way of example, wrinkles present in each tier of face from forehead to mouth. [0087] A combination of local skin signs may be used to predict (classify) global appearance (e.g. apparent age, radiance, tiredness, etc.). Appearance may also be determined and compared by performing skin analysis in the presence of make-up. The skin diagnostics herein is sufficiently exhaustive in relation to the nature and position of facial signs to be able to explain the perception when other human beings are looking at the subject. The skin diagnosis of the skin signs can be used to drive a further conclusion regarding apparent age such as based on more than 95% of perception from others. In the presence of make-up, the skin diagnosis and further prediction/classification regarding a global appearance or attractiveness may be used to measure effectiveness and establish an impact of foundation, etc. to mask skin aging signs and how lines and structure of the face could be recovered.--, in [0096]-[0087]; also see WANG: e.g., -- For the classification pleoptic is an advantageous characteristic of this information. by parity of reasoning, the whole model will all channel images on prescribed by the above weight distribution self-learning process, and finally will be more accurate for a genetic disease onset facial feature of giving high weight, the facial feature independent of the classification of given low weight, so as to realize extracting to optimization of precisely extracting the facial feature extraction from the global fuzzy, but also improves the accuracy of the model prediction…. Because in the conventional deep neural network model, for extracting facial features is no pertinence of genetic disease, namely performing fuzzy extraction part, but the characteristic of the whole human face facial feature of multiple genetic disease embodied as part of organ characteristic variation. i.e., for a genetic disease, eye, nose, mouth and so on with an independent symptom phenotype does not necessarily exist at the same time, that is to say for a diagnostic and symptom-free if part of the same feature extraction operation, will reduce model for genetic characteristic of the identification ability. Therefore, in order to accurately extracting facial feature model to genetic disease --, in pages 7-8 of English version of CN 111368672 A as provided in This Office Action; and, -- it can use global information extracting capability of SE Block to full utilization of the network global information in the shallow layer, extracting ability so as to improve the network global facial feature of the genetic disease. 
Compared with the traditional facial recognition, the target of genetic identification not only needs the local feature of the specific individual, it needs more global information of a condition, therefore, can through the SE block of adding help network learning to more weight effective part and less weight of facial features.--, in page 7 of English version of CN 111368672 A as provided in This Office Action; also see: -- in the conventional deep neural network model, for extracting facial features is no pertinence of genetic disease, namely performing fuzzy extraction part, but the characteristic of the whole human face facial feature of multiple genetic disease embodied as part of organ characteristic variation. i.e., for a genetic disease, eye, nose, mouth and so on with an independent symptom phenotype does not necessarily exist at the same time, that is to say for a diagnostic and symptom-free if part of the same feature extraction operation, will reduce model for genetic characteristic of the identification ability. Therefore, in order to accurately extracting facial feature model to genetic disease…… the network input is X = (H, W, 1024), respectively, by 1 * 1 convolution kernel W θ performing convolution operation reduces the channel number, and calculating to obtain the θ convolutional (H, W, 1024) and after the convolution of (H, W, 1024) output. two output output then process the shape change operation to obtain the (HW, 1024), then both carrying out matrix multiplication (one of which need performing transposition operation) to obtain (HW, HW), and then the softmax operation to obtain the attention channel, these operations then finds the characteristic pattern of each pixel in pixel correlation with all the other position. Similarly to the 1 * 1 g of convolution operation, and then to shape transformation, and then softmax the output of matrix multiplication, to obtain (H, W, 512), namely a channel attention mechanism is applied to each feature map corresponding to position of all channels, intrinsically is weighted average of each position value output are all the other position,--, in pages 7-8 of English version of CN 111368672 A as provided in This Office Action; also see CHENG: e.g., -- generate a plurality of preliminary feature vectors based on an image associated with a human face, the plurality of preliminary feature vectors comprising a color-based feature vector; obtain at least one input feature vector based on the plurality of preliminary feature vectors; generate a deep feature vector based on the at least one input feature vector using the first sub-neural network--, in abstract, and, -- The analyzing may include analyzing a face in the image or video, which may include face detection, face representation, face identification, expression analysis, physical classification, or the like, or any combination thereof. Information may be obtained based on the analyzing result. [0082] The images or videos to be analyzed may be generated by image analyzing engine 120 from data obtained by imaging device 110, generated directly by imaging device 110, acquired from network 160, or input into image analyzing engine 120 from a computer readable storage media by a user. The images or videos may be two-dimensional or three-dimensional. [0083] Image analyzing engine 120 may perform a preprocessing for the data to be analyzed. 
The preprocessing may include image dividing, feature extracting, image registration, format converting, cropping, snapshotting, scaling, denoising, rotating, recoloring, subsampling, background elimination, normalization, or the like, or any combination thereof. During the preprocessing procedure, an image 135 focusing on a human face may be obtained from the image or video to be analyzed. Image 135 may be a color image, a grey image, or a binary image. Image 135 may be two-dimensional or three-dimensional.--, in [0081]-[0083]);

applying the facial-omics model to the aligned facial image to predict the one or more local features, and applying the classification model to the one or more global features and the one or more local features to generate a prediction of the at least one clinical parameter or medical condition for the three-dimensional facial image (see WANG: e.g., -- For the classification pleoptic is an advantageous characteristic of this information. by parity of reasoning, the whole model will all channel images on prescribed by the above weight distribution self-learning process, and finally will be more accurate for a genetic disease onset facial feature of giving high weight, the facial feature independent of the classification of given low weight, so as to realize extracting to optimization of precisely extracting the facial feature extraction from the global fuzzy, but also improves the accuracy of the model prediction…. Because in the conventional deep neural network model, for extracting facial features is no pertinence of genetic disease, namely performing fuzzy extraction part, but the characteristic of the whole human face facial feature of multiple genetic disease embodied as part of organ characteristic variation. i.e., for a genetic disease, eye, nose, mouth and so on with an independent symptom phenotype does not necessarily exist at the same time, that is to say for a diagnostic and symptom-free if part of the same feature extraction operation, will reduce model for genetic characteristic of the identification ability.
Therefore, in order to accurately extracting facial feature model to genetic disease…… the network input is X = (H, W, 1024), respectively, by 1 * 1 convolution kernel W θ performing convolution operation reduces the channel number, and calculating to obtain the θ convolutional (H, W, 1024) and after the convolution of (H, W, 1024) output. two output output then process the shape change operation to obtain the (HW, 1024), then both carrying out matrix multiplication (one of which need performing transposition operation) to obtain (HW, HW), and then the softmax operation to obtain the attention channel, these operations then finds the characteristic pattern of each pixel in pixel correlation with all the other position. Similarly to the 1 * 1 g of convolution operation, and then to shape transformation, and then softmax the output of matrix multiplication, to obtain (H, W, 512), namely a channel attention mechanism is applied to each feature map corresponding to position of all channels, intrinsically is weighted average of each position value output are all the other position,--, in pages 7-8 of English version of CN 111368672 A as provided in This Office Action; also see JIANG: e.g., -- In addition to model validation on image-based scores for selfies data, validation is also performed on a subset of the test subjects for which the dermatologists were able to score the skin condition signs in person. Expert dermatologists received visits from 68 subjects (around 12 experts per subject), and assessed them live, without regard to the subject image-based scores. Similarly to image- based analysis, the mean absolute error was calculated for each skin condition sign, for: 1) the model in system 200, by comparing the prediction from the model to the average experts’ score for the sign for the particular test subject, and 2) for expert in person assessment, by comparing each expert’s score vector to the average experts’ score vector for this subject.--, in [0054], [0063]-[0064], and, -- the deep neural network model may be configured as a depthwise separable convolution neural network comprising convolutions in which individual standard convolutions are factorized into a depthwise convolution and a pointwise convolution. The depthwise convolution is limited to applying a single filter to each input channel and the pointwise convolution is limited to combining outputs of the depthwise convolution.--, in [0075]-[0077], [0080], and [0085]; also see CHENG: e.g., -- generate a plurality of preliminary feature vectors based on an image associated with a human face, the plurality of preliminary feature vectors comprising a color-based feature vector; obtain at least one input feature vector based on the plurality of preliminary feature vectors; generate a deep feature vector based on the at least one input feature vector using the first sub-neural network--, in abstract, and, -- The analyzing may include analyzing a face in the image or video, which may include face detection, face representation, face identification, expression analysis, physical classification, or the like, or any combination thereof. Information may be obtained based on the analyzing result. [0082] The images or videos to be analyzed may be generated by image analyzing engine 120 from data obtained by imaging device 110, generated directly by imaging device 110, acquired from network 160, or input into image analyzing engine 120 from a computer readable storage media by a user. 
The images or videos may be two-dimensional or three-dimensional. [0083] Image analyzing engine 120 may perform a preprocessing for the data to be analyzed. The preprocessing may include image dividing, feature extracting, image registration, format converting, cropping, snapshotting, scaling, denoising, rotating, recoloring, subsampling, background elimination, normalization, or the like, or any combination thereof. During the preprocessing procedure, an image 135 focusing on a human face may be obtained from the image or video to be analyzed. Image 135 may be a color image, a grey image, or a binary image. Image 135 may be two-dimensional or three-dimensional.--, in [0081]-[0083]).
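Taken together, the claim 1 elements mapped above recite a train-then-operate pipeline: a first CNN for facial regions/landmarks, alignment to a template, a second CNN for global features, a facial-omics model for local features, and a classification model over both feature sets. A minimal PyTorch sketch of that data flow follows; the module internals and sizes are placeholders taken neither from the application nor from the cited references:

```python
import torch
import torch.nn as nn

def align_to_template(image, landmarks, template):
    # Placeholder: a real system would fit and apply a landmark-to-template
    # transformation here (see the claim 5 discussion below).
    return image

class FacePipeline(nn.Module):
    """Claim 1 data flow: landmarks -> alignment -> global + local features -> prediction."""
    def __init__(self, landmark_cnn, global_cnn, facial_omics, classifier, template=None):
        super().__init__()
        self.landmark_cnn = landmark_cnn   # first CNN: facial regions/landmarks
        self.global_cnn = global_cnn       # second CNN: global features
        self.facial_omics = facial_omics   # facial-omics model: local features
        self.classifier = classifier       # clinical parameter / condition head
        self.template = template

    def forward(self, image):
        landmarks = self.landmark_cnn(image)
        aligned = align_to_template(image, landmarks, self.template)
        global_feats = self.global_cnn(aligned)
        local_feats = self.facial_omics(aligned)
        return self.classifier(torch.cat([global_feats, local_feats], dim=-1))

# Toy stand-ins with matching shapes; the input tensor stands in for a
# rendered 3-D facial scan (e.g. color + depth channels).
def head(n_out):
    return nn.Sequential(nn.Flatten(), nn.LazyLinear(n_out))

pipeline = FacePipeline(head(68 * 3), head(128), head(32), nn.LazyLinear(2))
scan = torch.randn(1, 4, 64, 64)
print(pipeline(scan).shape)   # torch.Size([1, 2])
```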
In the construction method, a deep learning neural network is trained on a first set of face images using a first loss function to obtain a pre-trained model; parameters of the pre-trained model are updated in response to a network update operation; and the updated pre-trained model is then trained on a small amount of second face images using a second loss function until the second loss function reaches its minimum. The resulting genetic-disease facial recognition model can detect genetic risk from collected face images, and detection accuracy is improved even with a small amount of patient data.--, in abstract, and, --A computer device comprising a memory and a processor, the memory storing a computer program, the processor realizing the steps of the above method when executing the computer program. A computer readable storage medium on which a computer program is stored, the computer program realizing the steps of the above method when executed by a processor.--, in pages 3-4 of the English version of CN 111368672 A as provided in this Office Action). Re Claim 4, WANG as modified by CHENG and JIANG further discloses wherein the second convolutional neural network comprises a ResNet-50 in which a last global averaging layer is modified to produce an N-dimensional vector of global features, wherein N is greater than one hundred, such that the one or more global features comprise more than one hundred global features (see WANG: e.g., --In one embodiment, the deep learning neural network can use Resnet64 (a 64-layer residual network) as the backbone network, adding an SE (Squeeze and Excitation) module to each residual block of Resnet64 to form the SE-Resnet64 network structure shown in FIG. 2. The structure comprises, in order, a primary feature extraction layer (7 * 7 Conv + Pooling), four SE-Resnet residual modules, and finally a fully connected layer (FC). The four SE-Resnet residual modules comprise 10, 18, 24, and 10 layers in order; in this embodiment, every two SE-Resnet layers form one SE-Resnet residual block, so the four modules comprise 5, 9, 12, and 5 SE-Resnet residual blocks, respectively. The channel numbers corresponding to the residual modules are 64, 128, 256, and 512, respectively, progressing from low-level features to high-order abstract features so as to extract deeper and richer features. The input image first passes through one convolutional layer (7 * 7 Conv) to obtain a primary feature map, then through a pooling layer (Pooling) into the residual layers (SE-Resnet) for feature extraction; after a pooling layer (Avg Pooling), a fully connected layer (FC) outputs the detection result (Syndromes). Each SE-Resnet residual block comprises a Resnet block plus SE. Because Squeeze and Excitation (SE block) is a network block of small weight magnitude and small resource footprint, the squeeze operation can compress the facial features of the image so that each two-dimensional feature channel becomes a real number that has, to some extent, a global receptive field, and the output dimension matches the number of input feature channels. Therefore, the global-information extracting capability of the SE block can be used to make full use of the network's global information in the shallow layers, improving the network's ability to extract global facial features of the genetic disease. Compared with traditional facial recognition, the target of genetic-disease identification needs not only the local features of specific individuals but also more global information about a condition; therefore, adding the SE block helps the network learn to weight effective facial features more heavily and others less.--, in pages 6-7 of the English version of CN 111368672 A as provided in this Office Action).
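As context for the SE ("Squeeze and Excitation") modules discussed in the WANG excerpt, here is a minimal sketch of a standard SE block (PyTorch assumed; the reduction ratio of 16 is a common convention, not taken from WANG):

    import torch.nn as nn

    class SEBlock(nn.Module):
        # Squeeze: global average pooling compresses each feature map to one
        # real number with a global receptive field. Excite: a bottleneck MLP
        # produces per-channel weights that rescale the input feature maps.
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):                     # x: (B, C, H, W)
            b, c, _, _ = x.shape
            w = self.fc(x.mean(dim=(2, 3)))       # (B, C) channel weights
            return x * w.reshape(b, c, 1, 1)      # reweight the channels

As the excerpt notes, the output dimension matches the number of input feature channels, so the block can be dropped into each residual module.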
Re Claim 5, WANG as modified by CHENG and JIANG further discloses wherein aligning the facial image to a template based on the identified plurality of facial landmarks comprises computing a transformation that moves each of the identified plurality of facial landmarks in the three-dimensional facial image to a corresponding position of that facial landmark in the template (see JIANG: e.g., -- a convolutional neural network (CNN) configured to classify pixels of an image to determine a plurality (N) of respective skin sign diagnoses for each of a plurality (N) of respective skin signs wherein the CNN comprises a deep neural network for image classification configured to generate the N respective skin sign diagnoses and wherein the CNN is trained using skin sign data for each of the N respective skin signs; and a processing unit coupled to the storage unit configured to receive the image and process the image using the CNN to generate the N respective skin sign diagnoses.--, in [0005]-[0006], and, -- a face and landmark detector to pre-process the image and the processing unit may be configured to generate a normalized image from the image using the face and landmark detector and use the normalized image when using the CNN.--, in [0010], and, -- a facial landmark detector 316 (such as is described in V. Kazemi, J. Sullivan, One millisecond face alignment with an ensemble of regression trees, in: IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1867-1874, incorporated herein by reference) on a source input image 318 to normalize the face (outputting image 202) based on detected landmarks. In this way, an input facial image x is always an upright frontal image of a face at a fixed scale.--, in [0044]; and, --to link local to global (e.g. specific conditions in a region of the face while processing the entire face) and to have an exhaustive mapping of the face targeting all the key areas - by way of example, wrinkles present in each tier of face from forehead to mouth. [0087] A combination of local skin signs may be used to predict (classify) global appearance (e.g. apparent age, radiance, tiredness, etc.). Appearance may also be determined and compared by performing skin analysis in the presence of make-up. The skin diagnostics herein is sufficiently exhaustive in relation to the nature and position of facial signs to be able to explain the perception when other human beings are looking at the subject. The skin diagnosis of the skin signs can be used to drive a further conclusion regarding apparent age such as based on more than 95% of perception from others. In the presence of make-up, the skin diagnosis and further prediction/classification regarding a global appearance or attractiveness may be used to measure effectiveness and establish an impact of foundation, etc. to mask skin aging signs and how lines and structure of the face could be recovered.--, in [0086]-[0087]).
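A transformation of the kind recited in claim 5 can be illustrated with the classical Kabsch/Procrustes solution, which computes the rigid rotation and translation moving detected landmarks onto their template positions. This is a generic sketch in NumPy under that assumption, not the applicant's or any cited reference's actual method:

    import numpy as np

    def align_to_template(landmarks, template):
        # landmarks, template: (K, 3) arrays of corresponding 3-D points.
        lm_mean, tp_mean = landmarks.mean(axis=0), template.mean(axis=0)
        h = (landmarks - lm_mean).T @ (template - tp_mean)
        u, _, vt = np.linalg.svd(h)
        d = np.sign(np.linalg.det(u @ vt))       # guard against reflections
        rot = u @ np.diag([1.0, 1.0, d]) @ vt
        t = tp_mean - lm_mean @ rot
        return rot, t                            # apply as: points @ rot + t

Applying points @ rot + t to the whole facial surface moves each landmark to (approximately) the corresponding position of that landmark in the template.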
Re Claim 6, WANG as modified by CHENG and JIANG further discloses wherein applying the second convolutional neural network to the aligned three-dimensional facial image to predict the one or more global features comprises: projecting the aligned three-dimensional facial image into a plurality of two-dimensional directional views, wherein each of the plurality of two-dimensional directional views is a view of the three-dimensional facial image from a different angle than the other plurality of two-dimensional directional views (see WANG: e.g., --In the genetic risk prediction method, a multi-task convolutional neural network identifies the human face area in the image data to be predicted and detects a face confidence degree for the face area, so as to extract face areas whose confidence exceeds a set value. Image quality detection is performed on the face region, and the genetic-disease facial recognition model constructed by the above method performs recognition on regions whose image quality meets the requirement, outputting a risk prediction result for the face area on each genetic-disease type. Because the accuracy of the output depends on the preceding layers' computation, this greatly improves the genetic risk prediction result….. The P-Net is configured as a fully convolutional network, as shown in FIG. 7; the input is a 12 * 12 picture, so before training the training data are generated (bounding boxes are generated and resized to 12 * 12) and transformed into a 12 * 12 * 3 structure. Through ten 3 * 3 * 3 convolution kernels and a 2 * 2 Max Pooling (stride=2) operation, ten 5 * 5 feature maps are generated; through sixteen 3 * 3 * 10 convolution kernels, sixteen 3 * 3 feature maps are generated; and thirty-two 1 * 1 feature maps are generated through thirty-two 3 * 3 * 16 convolution kernels.--, in pages 9-10 of the English version of CN 111368672 A as provided in this Office Action; also see CHENG: abstract and [0081]-[0083], as quoted above for claim 1), and applying the second convolutional neural network to the plurality of two-dimensional directional views to predict the one or more global features (see WANG: e.g., --In summary, primary features are extracted by convolution and candidate frames are set; the window size is adjusted automatically, windows are filtered by non-maximum suppression, and the resulting features are input to three convolutional layers. The network then judges whether the suggested area is a human face, performs bounding-box regression and facial key-point localization on the face part, and outputs, through a face classifier, a plurality of windows where faces may exist. Finally, the primarily screened face regions are input to the R-Net for processing. The R-Net is a convolutional neural network whose architecture, as shown in FIG. 8, adds a fully connected layer compared with the P-Net; its training data, bounding-box regression targets, and face contour key points are mainly generated by the P-Net. The input face-area windows are further selected and adjusted, filtering windows with high precision and optimizing the face region.--, in pages 9-10 of the English version of CN 111368672 A as provided in this Office Action; and, --In one embodiment, when outputting the face-area risk prediction result for each disease type, the method further comprises: generating and outputting a face heat map according to the face-area risk prediction result for each genetic-disease type. Specifically, because the facial features of a genetic disease are usually characteristic variations of parts of facial organs, a corresponding heat map can be generated based on the risk prediction result; facial features carrying large weight in the prediction result are colored more intuitively and obviously, so that the output result is more direct.--, in pages 10-11 of the English version of CN 111368672 A as provided in this Office Action).
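Projection into two-dimensional directional views, as recited in claim 6, can be illustrated with a simple rotate-then-project sketch (NumPy; orthographic projection is assumed purely for illustration):

    import numpy as np

    def directional_view(points, yaw_deg=0.0, pitch_deg=0.0):
        # points: (K, 3) aligned 3-D face points. Yaw rotates left/right,
        # pitch rotates up/down; each (yaw, pitch) pair yields one 2-D view.
        yaw, pitch = np.radians([yaw_deg, pitch_deg])
        ry = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                       [0.0, 1.0, 0.0],
                       [-np.sin(yaw), 0.0, np.cos(yaw)]])
        rx = np.array([[1.0, 0.0, 0.0],
                       [0.0, np.cos(pitch), -np.sin(pitch)],
                       [0.0, np.sin(pitch), np.cos(pitch)]])
        rotated = points @ (ry @ rx).T
        return rotated[:, :2]                    # drop depth: one 2-D view

Calling this for several distinct (yaw, pitch) pairs produces views of the face from different angles, each of which can then be fed to the second convolutional neural network.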
Re Claim 7, WANG as modified by CHENG and JIANG further discloses wherein the plurality of two-dimensional directional views comprises a frontal view of a face in the three-dimensional facial image, one or more views of the face rotated in a leftward direction relative to the frontal view, one or more views of the face rotated in a rightward direction relative to the frontal view, one or more views of the face rotated in an upward direction relative to the frontal view, and one or more views of the face rotated in a downward direction relative to the frontal view (see JIANG: e.g., -- incorporated herein by reference) on a source input image 318 to normalize the face (outputting image 202) based on detected landmarks. In this way, an input facial image x is always an upright frontal image of a face at a fixed scale.--, in [0044]). Re Claim 18, WANG as modified by CHENG and JIANG further discloses wherein the classification model comprises a multilayer perceptron that outputs a vector of probabilities for a plurality of classifications representing the at least one clinical parameter or medical condition (see JIANG: [0005]-[0006], [0010], [0044], and [0086]-[0087], as quoted above for claim 5).
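The multilayer perceptron of claim 18, a head that maps a feature vector to a vector of class probabilities, can be sketched in a few lines (PyTorch; all dimensions are hypothetical):

    import torch
    import torch.nn as nn

    num_classes = 6                         # hypothetical number of classes
    features = torch.randn(1, 512)          # hypothetical fused feature vector

    mlp_head = nn.Sequential(
        nn.Linear(512, 256),
        nn.ReLU(),
        nn.Linear(256, num_classes),        # one logit per classification
    )
    probs = torch.softmax(mlp_head(features), dim=-1)   # probability vector

The softmax guarantees that the output is a probability vector over the plurality of classifications.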
Claims 8-12 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over WANG as modified by CHENG and JIANG, and further in view of FU (US 20210232803 A1, which claims priority to U.S. Provisional Application No. 62/965,164, filed January 23, 2020). Re Claim 8, WANG as modified by CHENG and JIANG does not explicitly disclose wherein the one or more views of the face rotated in the leftward direction, the one or more views of the face rotated in the rightward direction, the one or more views of the face rotated in the upward direction, and the one or more views of the face rotated in the downward direction all comprise a plurality of views at fixed intervals of rotation. FU discloses this limitation (see FU: e.g., --Advances in face rotation, and other face-based generative tasks, have grown more frequent with further advances in the topic of deep learning as a whole…side-views in HR space, which helps a model reconstruct high-frequency information of faces (i.e., periocular, nose, and mouth regions). Furthermore, a three-level loss (i.e., pixel, patch, and global-based) is introduced to learn precise non-linear transformations from LR side-views to HR frontal. Moreover, SRGAN accepts multiple LR profile faces as input, while improving with each sample added. Additional gain is squeezed by adding an orthogonal constraint in the generator to penalize redundant latent representations and, hence, diversify the learned features space. [0049] Face-based generative tasks (e.g., face rotation (Rui Huang, Shu Zhang, Tianyu Li, and Ran He. Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Front View Synthesis. In IEEE International Conference on Computer Vision (ICCV), 2017; Luan Tran, Xi Yin, and Xiaoming Liu. Disentangled representation learning GAN for pose-invariant face recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017; Yibo Hu, Xiang Wu, Bing Yu, Ran He, and Zhenan Sun. Pose-guided photorealistic face rotation. In CVPR, 2018),--, in [0048]-[0049]). WANG (as modified by CHENG and JIANG) and FU are combinable as they are in the same field of endeavor: automatic facial-feature analysis from image data with a neural network. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify WANG (as modified by CHENG and JIANG)'s method using FU's teachings, by including views of the face rotated in the leftward, rightward, upward, and downward directions, each comprising a plurality of views at fixed intervals of rotation, in WANG (as modified by CHENG and JIANG)'s facial feature and region detection, facial recognition, and facial modeling, in order to perform transformation between LR side-view and HR frontal-view faces (see FU: e.g., [0048]-[0049], [0062], and [0070]-[0074]). Re Claim 9, WANG as modified by CHENG, JIANG, and FU further discloses wherein each plurality of views comprises at least three views (see FU: [0048]-[0049], as quoted above for claim 8; also [0062] and [0070]-[0074]).
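The "fixed intervals of rotation" of claim 8 and the "at least three views" of claim 9 amount to enumerating a regular grid of view angles; a trivial sketch (the 10-degree step is a hypothetical choice, not taken from any cited reference):

    step = 10                                        # hypothetical interval
    left  = [(-step * k, 0) for k in range(1, 4)]    # yaw -10, -20, -30
    right = [( step * k, 0) for k in range(1, 4)]
    up    = [(0,  step * k) for k in range(1, 4)]    # pitch +10, +20, +30
    down  = [(0, -step * k) for k in range(1, 4)]
    views = [(0, 0)] + left + right + up + down      # frontal + 12 rotations

Each (yaw, pitch) pair can be passed to the directional_view sketch above; three views per direction satisfies both the fixed-interval and the at-least-three-views limitations.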
Re Claim 10, WANG as modified by CHENG, JIANG, and FU further discloses wherein applying the facial-omics model to the aligned three-dimensional facial image to predict the one or more local features comprises: segmenting the three-dimensional facial image into a plurality of regions of interest (see JIANG: e.g., --The N skin sign diagnoses and ethnicity vector (or a single value thereof) is presented at 604 such as via a GUI which may also present the image and/or normalized image. Presenting the image may comprise segmenting the image (or normalized image) for each (or at least one) of the N skin signs, indicating which region(s) of face relates to which skin sign. An extract from the image may be made such as using a bounding box and/or mask to isolate a region for which a skin sign diagnosis was prepared for presentation in a GUI. The CNN may be configured to output segmentation related data that may comprise the bounding box and/or mask for each (or at least one) particular region. The image may be annotated such as via augmented reality or virtual reality techniques to highlight the region.--, in [0080]; also see FU: e.g., -- Existing face frontalization methods (Yibo Hu, Xiang Wu, Bing Yu, Ran He, and Zhenan Sun. Pose-guided photorealistic face rotation. In CVPR, 2018; Rui Huang, Shu Zhang, Tianyu Li, and Ran He. Beyond face rotation: Global and local perception GAN for photorealistic and identity preserving front view synthesis. In IEEE International Conference on Computer Vision (ICCV), 2017; Xi Yin, Xiang Yu, Kihyuk Sohn, Xiaoming Liu, and Manmohan Chandraker. Towards large-pose face frontalization in the wild. In IEEE International Conference on Computer Vision (ICCV), 2017; Peipei Li, Xiang Wu, Yibo Hu, Ran He, and Zhenan Sun. M2fpa: A multi-yaw multi-pitch high-quality database and benchmark for facial pose analysis. 2019) tend to set the generator as an encoder-decoder with skip connections (i.e., U-Net (Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In The Medical Image Computing and Computer Assisted Intervention Society, 2015)).--, in [0051] and [0074]; also see CHENG: abstract and [0081]-[0083], as quoted above for claim 1); and applying the facial-omics model to the plurality of regions of interest to extract local features from each of the plurality of regions of interest (see JIANG: [0086]-[0087], as quoted above for claim 5; also see FU: e.g., --An example embodiment disclosed further below can recover HR images from tiny faces with detailed information, such as fine skin details, clear shapes, and sharp edges, and generates more realistic and identity-preserving faces with better image quality (sharper and clearer details). An example embodiment integrates the auxiliary path 114 (also referred to interchangeably herein as a super-resolution module or side-view SR branch) to provide fine details of non-frontal views (e.g., side-views) in high-resolution space, which helps the apparatus 100 (also referred to interchangeably herein as a model, network, or SRGAN) to reconstruct high-frequency information of faces (i.e., periocular, nose, and mouth regions). An example embodiment introduces a three-level loss (i.e., pixel, patch, and global-based) to learn more precise non-linear transformations from low-resolution side-views to high-resolution frontal views, such as the pixel-level 122, local-level 124, and global-level 126 losses.--, in [0032], and, --TP-GAN has two path-ways for frontal face generation to capture local and global features (Rui Huang, Shu Zhang, Tianyu Li, and Ran He. Beyond face rotation: Global and local perception GAN for photorealistic and identity preserving front view synthesis. In IEEE International Conference on Computer Vision (ICCV), 2017).--, in [0062]). See the similar obviousness and motivation statements for the combination of teachings of the cited references as addressed above for claim 1 and for claim 8.
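Segmenting an aligned view into landmark-defined regions of interest and extracting per-region features, as in claim 10, can be sketched as follows (NumPy; the landmark index sets are hypothetical and depend on the annotation scheme used):

    import numpy as np

    REGIONS = {                                   # hypothetical index sets
        "right_eye": [36, 37, 38, 39, 40, 41],
        "nose": [27, 28, 29, 30, 31, 32, 33, 34, 35],
    }

    def region_crops(image, landmarks_2d, pad=4):
        # One axis-aligned crop per region, taken from the bounding box of
        # the region's landmarks; each crop then goes to the extractor.
        crops = {}
        for name, idx in REGIONS.items():
            pts = landmarks_2d[np.array(idx)]
            x0, y0 = np.floor(pts.min(axis=0)).astype(int) - pad
            x1, y1 = np.ceil(pts.max(axis=0)).astype(int) + pad
            crops[name] = image[max(y0, 0):y1, max(x0, 0):x1]
        return crops

Applying the facial-omics model to each crop in turn yields local features per region of interest.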
Re Claim 11, WANG as modified by CHENG, JIANG, and FU further discloses wherein the plurality of regions of interest are non-overlapping, and wherein the plurality of regions of interest comprises a corner of right eye, right side of nose, upper right eye, right eye, lower right eye, chin, glabella, forehead, right cheek, philtrum, right temple, nose, mouth, corner of left eye, left side of nose, upper left eye, left eye, lower left eye, left cheek, and left temple (see JIANG: [0086]-[0087], as quoted above for claim 5; also see FU: [0032] and [0062], as quoted above for claim 10). Re Claim 12, WANG as modified by CHENG, JIANG, and FU further discloses wherein segmenting the three-dimensional facial image into a plurality of regions of interest comprises: representing the three-dimensional facial image as a face graph (see JIANG: [0086]-[0087], as quoted above for claim 5; also see FU: [0032] and [0062], as quoted above for claim 10; also see CHENG: abstract and [0081]-[0083], as quoted above for claim 1); and connecting subsets of the identified plurality of facial landmarks in the face graph into cycles representing the plurality of regions of interest (see JIANG: [0086]-[0087], as quoted above for claim 5; also see FU: [0032] and [0062], as quoted above for claim 10, and, --[0049] Face-based generative tasks (e.g., face rotation (Rui Huang, Shu Zhang, Tianyu Li, and Ran He. Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Front View Synthesis. In IEEE International Conference on Computer Vision (ICCV), 2017; Luan Tran, Xi Yin, and Xiaoming Liu. Disentangled representation learning GAN for pose-invariant face recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017; Yibo Hu, Xiang Wu, Bing Yu, Ran He, and Zhenan Sun. Pose-guided photorealistic face rotation. In CVPR, 2018), hallucination (Yu Chen, Ying Tai, Xiaoming Liu, Chunhua Shen, and Jian Yang. Fsrnet: End-to-end learning face SR with facial priors. In CVPR, 2018; Adrian Bulat and Georgios Tzimiropoulos. Super-fan: Integrated facial landmark localization and SR of real-world low resolution faces in arbitrary poses with GANs. In CVPR, 2018; Yu Yin, Joseph P Robinson, Yulun Zhang, and Yun Fu. Joint super-resolution and alignment of tiny faces. Conference on Artificial Intelligence (AAAI), 2020), and attribute editing (Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In CVPR, 2018; Zhenliang He, Wangmeng Zuo, Meina Kan, Shiguang Shan, and Xilin Chen. Attgan: Facial attribute editing by only changing what you want. IEEE Transactions on Image Processing, 2019) have gained more of the spotlight in research communities based on advancements via deep learning.--, in [0049]).
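The face-graph representation of claim 12, with landmarks as nodes and subsets connected into closed cycles that bound regions of interest, can be illustrated abstractly (the landmark indices are again hypothetical):

    # A region of interest is a closed cycle over landmark nodes: the edge
    # list connects consecutive landmarks and returns to the start.
    FACE_GRAPH_CYCLES = {
        "right_eye": [36, 37, 38, 39, 40, 41, 36],
        "mouth": [48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 48],
    }

    def cycle_edges(cycle):
        return list(zip(cycle[:-1], cycle[1:]))   # (36,37), ..., (41,36)

The polygon enclosed by each cycle delimits one region of interest on the facial surface.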
Re Claim 14, WANG as modified by CHENG, JIANG, and FU further discloses wherein the local features comprise one or both of one or more morphological features or one or more textural features (see WANG: the SE-block passage from page 7 of the English version of CN 111368672 A quoted above for claim 4; also see: --In a conventional deep neural network model, facial feature extraction is not targeted at genetic disease; extraction is performed indiscriminately over the whole face, whereas the facial features of many genetic diseases are embodied as characteristic variations of particular organs. That is, for a given genetic disease, independent symptom phenotypes of the eyes, nose, mouth, and so on do not necessarily occur at the same time, so applying the same feature extraction to diagnostic and symptom-free parts alike will reduce the model's ability to identify genetic-disease characteristics. Therefore, in order for the model to accurately extract facial features of genetic disease……--, in pages 7-8 of the English version of CN 111368672 A as provided in this Office Action, together with the attention-mechanism passage quoted above for claim 1; also see JIANG: [0086]-[0087], as quoted above for claim 5, and, --[0088] The skin diagnosis method and techniques herein measure five clinical clusters of the face (wrinkles/texture, sagging, pigmentation disorders, vascular disorders, cheek pores) which facilitate data to describe all impacts of the aging process, environmental conditions (solar exposures, chronic urban pollution exposures, etc.) or lifestyles (stress, tiredness, quality of sleep, smoking, alcohol, etc.). By measuring these through time, in motion, or comparing them to the average for the consumer's age, the method, computing device, etc. may be configured to provide information about acceleration of aging and the clear impact of environment (some signs impact some clusters and not others...)--, in [0086]-[0088]). Re Claim 15, WANG as modified by CHENG, JIANG, and FU further discloses wherein the local features comprise a plurality of textural features, and wherein the plurality of textural features comprises kurtosis, skewness, standard deviation, contrast, correlation, uniformity, directionality, homogeneity, and coarseness (see WANG: the passages from pages 7-8 of the English version of CN 111368672 A quoted above for claims 1 and 14; also see FU: e.g., --Comparing to pixel-level loss, patch-level loss pays more attention to image structures (i.e., the edge and shape of facial components)…. where μ_x, μ_y and σ_x, σ_y correspond to the mean and standard deviation of x and y, respectively, and σ_xy is the covariance of x and y. Constants C_1 = 0.01^2 and C_2 = 0.03^2 are added for numeric stability.--, in [0082]-[0084]).
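The textural features listed in claim 15 are standard first-order statistics and grey-level co-occurrence matrix (GLCM) properties; a sketch using SciPy and scikit-image (uniformity corresponds to the GLCM "energy" property; Tamura coarseness and directionality are omitted here for brevity):

    import numpy as np
    from scipy.stats import kurtosis, skew
    from skimage.feature import graycomatrix, graycoprops

    def texture_features(patch):
        # patch: uint8 grey-level region-of-interest crop.
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, normed=True)
        return {
            "kurtosis": kurtosis(patch, axis=None),
            "skewness": skew(patch, axis=None),
            "std": patch.std(),
            "contrast": graycoprops(glcm, "contrast").mean(),
            "correlation": graycoprops(glcm, "correlation").mean(),
            "homogeneity": graycoprops(glcm, "homogeneity").mean(),
            "uniformity": graycoprops(glcm, "energy").mean(),
        }

Each region-of-interest crop from the earlier sketch can be passed through this function to produce its textural feature vector.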
Claims 16-17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over WANG as modified by CHENG and JIANG, and further in view of Sachs (US 20200302825 A1). Re Claim 16, WANG as modified by CHENG and JIANG does not explicitly disclose that the at least one clinical parameter or medical condition comprises one or more of the following clinical parameters: age, weight, height, body mass index, smoking use, alcohol consumption, alanine aminotransferase, uric acid, hemoglobin concentrations, glutamyltransferase, hematocrit, and red blood cell volume. Sachs discloses this limitation (see Sachs: e.g., --[0193] AI, neural networks and other forms of machine learning can be used to assign a score to musical tracks to build the databases needed to predict what will induce a net parasympathetic nervous system activity in typical humans, or subsets of humans (e.g. age, peer group, demographic, musical taste, location, experience, weight, BMI, medical history, current illness, medication drug or cigarette or alcohol or marijuana use, education, previous exposure). Ongoing presentations and sensing will be used to update the database over time to allow for changes in musical tastes or parasympathetic response characteristics over time in a population.--, in [0193]-[0198]; also see: -- physiological response signals may be based on at least one of: heart rate, heart rate variability, low-frequency heart rate variability spectral power [0.04˜0.15 Hz], high-frequency heart rate variability spectral power [0.15˜0.4 Hz], ratio of low- to high-frequency power, blood pressure, diastolic blood pressure, systolic blood pressure, pulse pressure, blood volume pulse, pulse transit time, pulse wave velocity, blood pressure shape, waveform or pattern, baroreflex sensitivity, baroreceptor response, arterial wall stiffness, vascular elasticity, vascular tone, changes in vascular tone, markers of changes in vascular tone, orthostatic hemodynamic response, respiratory rate, respiratory sinus arrhythmia, respiratory pattern including regularity, depth, frequency, and increases and decreases in these measures over time including abrupt gasps or similar changes in breathing pattern, sympathetic nerve activity, micro-neurography, skin galvanic response, skin conductance, skin conductance level, skin conductance response, galvanic skin resistance, galvanic skin potential, electrodermal response, pilomotor reflex, pilomotor erection or goose bumps, shivering, trembling, pupil diameter, pupillary response, accelerometer or video based measurements of body, eyes and pupillary response and eye or peri-orbital musculature activities, extra-ocular muscle activities, eye movement, eye tracking to monitor fixation location and micro movements, saccade or saccades, eye blink induced by startle, eye blink rate or intensity or duration, startle response, startle reflex, exploratory behaviors, peripheral blood flow, peripheral blood flow changes, flushing, skin blood perfusion, superficial blood flow changes, skin blood perfusion changes, facial expressions (suggesting one or more of happiness, sadness, fear, disgust, anger, or surprise), imaging of facial expression, eye widening, mouth changes, skin temperature, muscle tone, muscle contraction, electromyography (EMG) of musculature including facial, cranial, neck, torso and limb as well as axial musculature, postural changes, head movements, body movements, body sway, body sway changes, hand or forearm shaking or trembling, EMG or mechanomyography (MMG) or accelerometer or imaging or other motion capture methods of limb shaking or trembling, lurching or jumping, startle response, startle reflex, freezing of movement, bradykinesia, bradykinesis, muscle stiffness, changes in posture or movement, speech pattern, etc.--, in [0005], [0015], [0125], and [0145]). WANG (as modified by CHENG and JIANG) and Sachs are combinable as they are in the same field of endeavor: automatic disease prediction using a neural network. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify WANG (as modified by CHENG and JIANG)'s method using Sachs's teachings, by including at least one clinical parameter or medical condition comprising one or more of the recited clinical parameters (age, weight, height, body mass index, smoking use, alcohol consumption, alanine aminotransferase, uric acid, hemoglobin concentrations, glutamyltransferase, hematocrit, and red blood cell volume) in WANG (as modified by CHENG and JIANG)'s disease-analysis neural network, in order to perform diagnosis and monitoring of medical pathologies including dysfunctions of the reflexive or homeostatic-level control systems (see Sachs: e.g., abstract, [0005], and [0193]-[0198]).
Clinical disorders exacerbated by relative sympathetic over-activity include hypertension (high blood pressure), heart failure, systolic heart failure, diastolic heart failure, heart failure with preserved ejection fraction, peripheral vascular disease, vascular aneurysm, angina, epilepsy, asthma, pain, rheumatoid arthritis, metabolic syndrome, Type 2 diabetes, obesity, sleep disorders, irritable bowel syndrome, multiple sclerosis, immunological disorders and allergies, as well as certain psychiatric conditions such as anxiety and panic disorders. For example, individuals with chronic pain may benefit from this therapy, including individuals suffering from musculoskeletal pain, neuralgia, arthritis, fibromyalgia, rheumatoid arthritis, inflammatory pain, non-inflammatory pain, nociceptive pain, or neuropathic pain, inflammatory bowel disease, cancer pain, bone pain, migraines, burns, shingles, multiple sclerosis, muscle spasm, etc. The inventors believe that automated, personalized selection or modulation of an auditory stimulus such as music or spoken word or audible book readings or meditations or mindfulness narratives or poetry readings or soothing sounds in real time, based on an individual's physiologic response to the auditory stimulus, may improve these and other chronic conditions.--, in [0200]; also see the physiological response signals passage quoted above for claim 16, in [0005], [0015], [0125], and [0145]). See the similar obviousness and motivation statements for the combination of the cited references as addressed above for claim 16.

Re Claim 19: WANG as modified by CHENG, JIANG, and Sachs further discloses wherein the classification model comprises a first model for predicting one or more clinical parameters other than age, a second model for predicting age, and a third model for predicting one or more medical conditions (see WANG: e.g., -- through the weight-distribution self-learning process described above, the whole model learns to assign high weights to the facial features associated with the onset of a genetic disease and low weights to facial features independent of the classification, refining feature extraction from globally fuzzy to precisely targeted and thereby improving the accuracy of the model's predictions. In a conventional deep neural network, facial-feature extraction has no specificity for genetic disease (the extraction is fuzzy), yet the facial characteristics of many genetic diseases manifest as variations in particular facial organs; that is, for a given genetic disease, independent symptom phenotypes in the eyes, nose, mouth, and so on do not necessarily occur together, so applying the same feature-extraction operation to symptomatic and symptom-free regions alike reduces the model's ability to identify the genetic characteristic. --, pages 7-8 of the English version of CN 111368672 A as provided in this Office Action; also see JIANG: e.g., -- In addition to model validation on image-based scores for selfie data, validation is also performed on a subset of the test subjects for which the dermatologists were able to score the skin condition signs in person. Expert dermatologists received visits from 68 subjects (around 12 experts per subject) and assessed them live, without regard to the subjects' image-based scores. Similarly to image-based analysis, the mean absolute error was calculated for each skin condition sign, for: 1) the model in system 200, by comparing the prediction from the model to the average experts' score for the sign for the particular test subject, and 2) for expert in-person assessment, by comparing each expert's score vector to the average experts' score vector for this subject.--, in [0054], [0063]-[0064], and, -- the deep neural network model may be configured as a depthwise separable convolution neural network comprising convolutions in which individual standard convolutions are factorized into a depthwise convolution and a pointwise convolution.
The depthwise convolution is limited to applying a single filter to each input channel, and the pointwise convolution is limited to combining the outputs of the depthwise convolution.--, in [0075]-[0077], [0080], and [0085]; also see Sachs: e.g., -- [0193] AI, neural networks, and other forms of machine learning can be used to assign a score to musical tracks to build the databases needed to predict what will induce net parasympathetic nervous system activity in typical humans, or subsets of humans (e.g., age, peer group, demographic, musical taste, location, experience, weight, BMI, medical history, current illness, medication, drug, cigarette, alcohol, or marijuana use, education, previous exposure). Ongoing presentations and sensing will be used to update the database over time to allow for changes in musical tastes or parasympathetic response characteristics over time in a population.--, in [0193]-[0198]; also see the physiological response signals passage quoted above for claim 16, in [0005], [0015], [0125], and [0145]). See the similar obviousness and motivation statements for the combination of the cited references as addressed above for claim 16.
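For orientation, the claim 19 mapping describes a shared facial-feature extractor feeding three task-specific predictors, and JIANG's depthwise separable convolution factorizes each standard convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) combination. Below is a minimal PyTorch sketch of both ideas; it is not code from WANG, JIANG, or any other cited reference, and all module names, layer sizes, and output counts (n_clinical, n_conditions) are illustrative assumptions.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # A standard convolution factorized into a depthwise convolution
    # (one 3x3 filter per input channel, via groups=in_ch) followed by a
    # 1x1 pointwise convolution that combines the depthwise outputs.
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class ThreeHeadFacialModel(nn.Module):
    # Shared backbone with three heads, mirroring the claimed first model
    # (clinical parameters other than age), second model (age), and third
    # model (medical conditions). Head sizes are assumptions.
    def __init__(self, n_clinical=10, n_conditions=6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            DepthwiseSeparableConv(32, 64),
            DepthwiseSeparableConv(64, 128, stride=2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.clinical_head = nn.Linear(128, n_clinical)     # e.g., BMI, ALT, uric acid
        self.age_head = nn.Linear(128, 1)                   # age regression
        self.condition_head = nn.Linear(128, n_conditions)  # e.g., diabetes, anemia

    def forward(self, x):
        z = self.backbone(x)
        return (self.clinical_head(z), self.age_head(z),
                torch.sigmoid(self.condition_head(z)))

# Usage:
# clinical, age, conditions = ThreeHeadFacialModel()(torch.randn(1, 3, 224, 224))

The three claimed models could equally be implemented as fully separate networks; the shared backbone above is only one plausible reading of the limitation.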
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over WANG as modified by CHENG, JIANG, and FU, and further in view of ZHANG (CN 110569795 A).

Re Claim 13: WANG as modified by CHENG, JIANG, and FU does not disclose that the facial-omics model comprises principal component analysis. ZHANG discloses a facial-omics model comprising principal component analysis (see ZHANG: e.g., -- step S303: fuse the dynamic temporal characteristic information of the key-point video frame sequence with the static structural characteristic information of the target video frame image to obtain fused characteristic information. Specifically, the fusion yields spatiotemporal characteristic information of the target object in the target video, referred to as the fused feature information. The fusion may be performed by first normalizing the dynamic temporal and the static structural characteristic information to the same range interval and directly concatenating the two kinds of characteristic information as the fused characteristic information. Alternatively, after normalization to the same range interval, principal component analysis (PCA) may be used to reduce the dimensionality of the higher-dimensional feature information so that the two kinds of feature information have a consistent dimension, or the lower-dimensional feature information may be raised to the same dimension as the other using a support vector machine (SVM); a Gaussian model or a Gaussian mixture model may then be used to model the two dimension-matched kinds of characteristic information in a unified way, the fused characteristic information being the characteristic information obtained after this processing. --, page 17 of the English version of CN 110569795 A as provided in this Office Action). WANG (as modified by CHENG, JIANG, and FU) and ZHANG are combinable because they are in the same field of endeavor: automatic facial-feature analysis from image data with neural networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the method of WANG (as modified by CHENG, JIANG, and FU) using ZHANG's teachings by including a facial-omics model comprising principal component analysis in WANG's facial-feature, region-detection, facial-recognition, and facial-modeling pipeline, in order to obtain fused characteristic information (see ZHANG: e.g., page 17 of the English version of CN 110569795 A as provided in this Office Action).
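The ZHANG passage describes three interchangeable fusion routes: direct concatenation after range normalization, PCA reduction of the higher-dimensional feature so the two dimensionalities match, or SVM-based dimension raising, optionally followed by Gaussian or Gaussian-mixture modeling. A minimal NumPy/scikit-learn sketch of the PCA route only, assuming the function and variable names (fuse_features, dynamic, static), which are invented for illustration and are not ZHANG's implementation:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import minmax_scale

def fuse_features(dynamic, static):
    # dynamic: (n_samples, d_dyn) temporal features from the key-point
    # video frame sequence; static: (n_samples, d_stat) structural
    # features from the target video frame image.
    dynamic = minmax_scale(dynamic)  # normalize both to the same [0, 1] range
    static = minmax_scale(static)
    target_dim = min(dynamic.shape[1], static.shape[1])
    # PCA reduces whichever feature set is higher-dimensional so the two
    # have a consistent dimension (assumes n_samples >= target_dim).
    if dynamic.shape[1] > target_dim:
        dynamic = PCA(n_components=target_dim).fit_transform(dynamic)
    elif static.shape[1] > target_dim:
        static = PCA(n_components=target_dim).fit_transform(static)
    return np.concatenate([dynamic, static], axis=1)  # fused characteristic information

# Usage:
# fuse_features(np.random.rand(100, 256), np.random.rand(100, 64)).shape -> (100, 128)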
Conclusion

Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension-of-time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEI WEN YANG, whose telephone number is (571) 270-5670. The examiner can normally be reached 8:00 am to 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax number for the organization to which this application or proceeding is assigned is (571) 273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system: status information for published applications is available through either Private PAIR or Public PAIR, while status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. For questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/WEI WEN YANG/
Primary Examiner, Art Unit 2662
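The reply-period mechanics recited in the conclusion reduce to simple date arithmetic: a three-month shortened statutory period from the mailing date, which runs instead to the advisory-action mailing date when a first reply was filed within two months, capped at the six-month statutory maximum. A simplified sketch of that rule as stated, assuming the third-party python-dateutil package and an invented helper name (reply_deadline); it ignores weekend and holiday adjustments:

from datetime import date
from dateutil.relativedelta import relativedelta

def reply_deadline(mailed, first_reply=None, advisory_mailed=None):
    sso = mailed + relativedelta(months=3)            # shortened statutory period
    statutory_max = mailed + relativedelta(months=6)  # absolute statutory cap
    if (first_reply is not None and advisory_mailed is not None
            and first_reply <= mailed + relativedelta(months=2)
            and advisory_mailed > sso):
        sso = advisory_mailed  # period runs to the advisory-action mailing date
    return min(sso, statutory_max)

# With this action's Jan 12, 2026 mailing date:
# reply_deadline(date(2026, 1, 12)) -> date(2026, 4, 12)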

Prosecution Timeline

Mar 08, 2023: Application Filed
Jun 25, 2025: Non-Final Rejection (§103)
Oct 27, 2025: Response Filed
Jan 12, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602789: ENDOSCOPIC IMAGE SEGMENTATION METHOD BASED ON SINGLE IMAGE AND DEEP LEARNING NETWORK (granted Apr 14, 2026; 2y 5m to grant)
Patent 12586413: METHOD FOR RECOGNIZING ACTIVITIES USING SEPARATE SPATIAL AND TEMPORAL ATTENTION WEIGHTS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12582359: IMAGE DISPLAY METHOD, STORAGE MEDIUM, AND IMAGE DISPLAY DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573034: IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND PROGRAM, AND IMAGE PROCESSING SYSTEM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567168: DATA PROCESSING METHOD AND APPARATUS, DEVICE, AND READABLE STORAGE MEDIUM (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
Grant Probability with Interview: 93% (the 82% baseline plus the 10.9-point interview lift, rounded)
Median Time to Grant: 2y 8m
PTA Risk: Moderate

Based on 657 resolved cases by this examiner; grant probability is derived from the career allowance rate.
