Prosecution Insights
Last updated: April 19, 2026
Application No. 18/700,448

SKIN DIAGNOSIS SYSTEM AND METHOD BASED ON IMAGE ANALYSIS USING DEEP LEARNING

Non-Final OA: §103, §112
Filed: Apr 11, 2024
Examiner: WELLS, HEATH E
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Amorepacific Corporation
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 75% — above average (58 granted / 77 resolved; +13.3% vs TC avg)
Interview Lift: +18.1% (resolved cases with interview vs without)
Avg Prosecution: 3y 5m (46 currently pending)
Total Applications: 123 (across all art units)
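The allow-rate figures above are simple ratios and can be sanity-checked directly. A minimal sketch, assuming the Tech Center average is back-computed from the reported +13.3% delta rather than independently reported:

```python
# Figures from the Examiner Intelligence panel above.
granted = 58
resolved = 77
delta_vs_tc = 13.3  # reported percentage-point edge over the TC average

allow_rate = granted / resolved * 100    # career allow rate, in percent
implied_tc_avg = allow_rate - delta_vs_tc

print(f"career allow rate: {allow_rate:.1f}%")     # 75.3%, displayed as 75%
print(f"implied TC average: {implied_tc_avg:.1f}%")  # 62.0%
```

The +18.1% interview lift cannot be reproduced the same way, since the with/without interview counts are not broken out in the panel.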

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 2.4% (-37.6% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)

Tech Center averages are estimates; based on career data from 77 resolved cases.
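Each statute line pairs a rate with a delta against the Tech Center average. Back-solving rate minus delta recovers the implied baseline, and (an observation from the arithmetic above, not a stated fact) every statute resolves to the same 40.0% estimate, suggesting a single TC-wide baseline:

```python
# (rate, delta vs TC avg) per statute, from the panel above.
stats = {
    "§101": (17.8, -22.2),
    "§103": (62.8, +22.8),
    "§102": (2.4, -37.6),
    "§112": (13.8, -26.2),
}

# Implied TC average = examiner rate minus the reported delta.
implied = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied)  # every statute back-solves to 40.0
```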

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged that this application is a National Stage application of PCT Application No. PCT/KR2022/015586. Priority to KR10-2021-0136568 with a priority date of 14 October 2021 and KR10-2022-0131817 with a priority date of 13 October 2021 is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The IDSs dated 11 April 2024, 27 August 2025, and 29 August 2025 have been considered and placed in the application file.

Specification - Abstract

Applicant is reminded of the proper content of an abstract of the disclosure. A patent abstract is a concise statement of the technical disclosure of the patent and should include that which is new in the art to which the invention pertains. The abstract should not refer to purported merits or speculative applications of the invention and should not compare the invention with the prior art. If the patent is of a basic nature, the entire technical disclosure may be new in the art, and the abstract should be directed to the entire disclosure. If the patent is in the nature of an improvement in an old apparatus, process, product, or composition, the abstract should include the technical disclosure of the improvement. The abstract should also mention by way of example any preferred modifications or alternatives. Where applicable, the abstract should include the following: (1) if a machine or apparatus, its organization and operation; (2) if an article, its method of making; (3) if a chemical compound, its identity and use; (4) if a mixture, its ingredients; (5) if a process, the steps. Extensive mechanical and design details of an apparatus should not be included in the abstract. The abstract should not contain legal language such as "comprising."
The abstract should be in narrative form and generally limited to a single paragraph within the range of 50 to 150 words in length. The sheet or sheets presenting the abstract may not include other parts of the application or other material. See MPEP § 608.01(b) for guidelines for the preparation of patent abstracts.

Specification - Drawings

The drawings are objected to because the blocks pertaining to elements shown in FIGS. 3-14 do not have descriptive labels in conformance with 37 CFR 1.84(n) and 1.84(o), or numbering that is further described in the specification. For example, a descriptive label of "neural network" or "apparatus," or numbering, should be inserted into FIG. 3 to describe the box that surrounds the identification steps. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification - Drawings

Acknowledgement is made of the color drawings submitted 11 April 2024 in this application. Applicants are reminded that, absent a successful petition, the black and white drawings submitted on 11 April 2024 will be used. No petition is currently on file.

Claim Interpretation

Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and ordinary meaning of terms as understood by one having ordinary skill in the art used in a claim will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional but does not require that feature or step does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int'l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).

Claims 1, 3, 5-10, 13 and 16 recite "at least one of." Since "at least one of" is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. Because, on balance, the disjunctive interpretation appears to enjoy the most specification support, the disjunctive interpretation (one of A, B, or C) is adopted for the purposes of this Office Action. Applicant's comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination.
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f): (A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and (C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "an imaging unit that acquires" in claim 16; "a face detection model that derives" in claim 16; and "a de-identification model that de-identifies" in claim 16.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their being interpreted under 35 U.S.C. 112(f).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 5 and 13 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
According to MPEP 2143.03(I), "If a claim is subject to more than one interpretation, at least one of which would render the claim unpatentable over the prior art, the examiner should reject the claim as indefinite under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph (see MPEP § 2175) and should reject the claim over the prior art based on the interpretation of the claim that renders the prior art applicable." (Ex parte Ionescu, 222 USPQ 537 (Bd. Pat. App. & Inter. 1984)).

Claim 5 is indefinite because the lack of any preamble is confusing, particularly as to the relationship between claim 5 and the independent or dependent claims. Claim 13 is indefinite because "the above items" is confusing, particularly as to the relationship between the items in claim 13 and the items in claims 1-12. Furthermore, there is no grounding in the specification for these terms, and the claim language itself does not adequately define them.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1 and 3-16 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2023/0123037 A1 (Jiang et al.) in view of US Patent Publication 2022/0335252 A1 (Georgievskaya et al.).

Claim 1

[Figure: Jiang et al. Fig. 1, showing portions of images used for diagnosis.]

Regarding Claim 1, Jiang et al.
teach a skin diagnosis method based on image analysis using deep learning performed by a processor ("skin diagnostics such as for dermatology and to skin treatment monitoring and more particularly to a system and method for automatic image-based skin diagnostics using deep learning," paragraph [0002]), the method comprising the steps of: acquiring a face image of a subject by photographing a target skin ("an image acquisition function to receive the image," paragraph [0014]); deriving shape or location information of a facial structure by recognizing feature points capable of identifying an individual in the acquired face image ("segmenting the image (or normalized image) for each (or at least one) of the N skin signs, indicating which region(s) of face relates to which skin sign. An extract from the image may be made such as using a bounding box and/or mask to isolate a region for which a skin sign diagnosis was prepared for presentation in a GUI," paragraph [0095]); and visualizing and providing skin diagnosis results for items corresponding to artificial neural network models and symptom locations for each item by inputting the de-identified face images into a plurality of the artificial neural network models, respectively ("The skin sign diagnosis for the region may be displayed. Colour may be used to indicate a severity that is proportional to the skin sign diagnosis such as using a scaling factor," paragraph [0095], and "a convolutional neural network (CNN) configured to classify pixels of an image to determine a plurality (N) of respective skin sign diagnoses each of a plurality (N) of respective non-disease skin signs wherein the CNN comprises a deep neural network for image classification configured to generate the N respective non-disease skin sign diagnoses and wherein the CNN is trained using non-disease skin sign data for each of the N respective non-disease skin signs," paragraph [0115]), wherein the items may include at least one of diagnoses for wrinkles, pigmentation, pores, erythema, and aging ("The model is trained and results evaluated on two datasets of female images according to the following nine skin signs: [0047] Nasolabial folds; [0048] Glabellar wrinkles; [0049] Forehead wrinkles; [0050] Underneath the eye wrinkles; [0051] Corner of the lips wrinkles; [0052] Ptosis of the lower part of the face; [0053] Cheek sebaceous pores; [0054] Whole face pigmentation; and [0055] Vascular disorders," paragraphs [0046]-[0055]).

Jiang et al. do not explicitly teach de-identifying the face image on the basis of the shape or location information.

[Figure: Georgievskaya et al. Fig. 8, showing an anonymized face.]

However, Georgievskaya et al. teach de-identifying the face image on the basis of the shape or location information of the facial structure such that personal information of an analysis target cannot be identified ("preparing a masked facial image by separating one or more areas comprising skin-pixels and one or more areas comprising pixels related to non-skin information," paragraph [0013], where a masked facial image is de-identified). Therefore, taking the teachings of Jiang et al. and Georgievskaya et al.
as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify "Automatic Image-Based Skin Diagnostics Using Deep Learning" as taught by Jiang et al. to use the "Method and System for Anonymizing Face Images" as taught by Georgievskaya et al. The suggestion/motivation for doing so would have been that "[s]ome existing facial image processing techniques provide a methodology for at least one of blurring or pixelization of facial images to anonymize them; however, such techniques may not be utilized for facial skin research purposes as they cause loss of skin pixels located closely to blurred or pixelated regions that contain significant information about facial skin features," as noted by the Georgievskaya et al. disclosure in paragraph [0010]. The combination is further motivated because it would predictably offer a greater feeling of security in use, as there is a reasonable expectation that people can be concerned about their face being uploaded to a public computer, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of system claim 16, while noting that the rejection above cites to both device and method disclosures. Claim 16 is mapped below for clarity of the record and to specify any new limitations not included in claim 1.

Claim 3

Regarding claim 3, Jiang et al. teach the skin diagnosis method according to claim 1, wherein the feature points are at least one of eyebrows, eyes, nose, and lips ("As depicted in FIG. 8A and FIG. 8B, an exemplary standardized facial image 802 comprising areas comprising skin-pixels 806 (i.e., forehead, nose, cheeks) and areas comprising pixels related to non-skin information 808 (i.e., eyebrows, eyes, hair) is converted into an anonymized facial image 804 shown in FIG. 8B," paragraph [0081]).

Claim 4

Regarding claim 4, Jiang et al. teach the skin diagnosis method according to claim 1, wherein each of the plurality of artificial neural network models is learned using a plurality of training samples as learning data ("data may be augmented with crops of different scales (randomly chosen from 0.8 to 1.0) to handle any scale variation even after the landmark based cropping," paragraph [0045]), and the plurality of training samples include transformations using a data augmentation technique (id., paragraph [0045]).

Claim 5

Regarding claim 5, Jiang et al. teach wherein the data augmentation technique includes at least one of random crop, blur, and flip processing ("data may be augmented with crops of different scales (randomly chosen from 0.8 to 1.0) to handle any scale variation even after the landmark based cropping," paragraph [0045]).

Claim 6

Regarding claim 6, Jiang et al. teach the skin diagnosis method according to claim 1, wherein the artificial neural network model for diagnosing the wrinkles is learned by adjusting parameters including the total number of a subject with detected wrinkles, an estimate of intensity (depth) compared to the surrounding undetected area in the subject with detected wrinkles, the total area with detected wrinkles, and the length and width of the detected wrinkles ("To find the best set of parameters θ, a loss function is minimized.
Experiments were performed with several loss functions," paragraph [0040], and "The skin diagnosis method and techniques herein measure five clinical clusters of the face (wrinkles/texture, sagging, pigmentation disorders, vascular disorders, cheek pores) which facilitate data to describe all impacts of the aging process, environmental conditions (solar exposures, chronic urban pollution exposures, etc.) or lifestyles (stress, tiredness, quality of sleep, smoking, alcohol, etc.)," paragraph [0103]), and outputs analysis results for at least one among the number of wrinkles, intensity (depth) of wrinkles, wrinkle area, wrinkle length, wrinkle width, distribution for each intensity, area, length or width of wrinkles, and wrinkle score ("Thus there is described a deep learning approach to skin diagnostics developed using data of females of different ages and ethnicities including the technical aspects of this approach and the results obtained," paragraph [0030]).

Claim 7

Regarding claim 7, Jiang et al. teach the skin diagnosis method according to claim 1, wherein the artificial neural network model for diagnosing the pigmentation is learned by adjusting parameters including the total number of a subject with detected pigmentation, an estimate of intensity compared to the surrounding undetected area in the subject with detected pigmentation, and the total area with detected pigmentation ("To find the best set of parameters θ, a loss function is minimized. Experiments were performed with several loss functions," paragraph [0040], and "The skin diagnosis method and techniques herein measure five clinical clusters of the face (wrinkles/texture, sagging, pigmentation disorders, vascular disorders, cheek pores) which facilitate data to describe all impacts of the aging process, environmental conditions (solar exposures, chronic urban pollution exposures, etc.) or lifestyles (stress, tiredness, quality of sleep, smoking, alcohol, etc.)," paragraph [0103]), and outputs analysis results for at least one among the number of pigmentation, intensity of pigmentation, pigmentation area, distribution for each intensity, area, length or width of pigmentation, and pigmentation score ("Thus there is described a deep learning approach to skin diagnostics developed using data of females of different ages and ethnicities including the technical aspects of this approach and the results obtained," paragraph [0030]).

Claim 8

Regarding claim 8, Jiang et al. teach the skin diagnosis method according to claim 1, wherein the artificial neural network model for diagnosing the pores is learned by adjusting parameters including the total number of a subject with detected pores, an estimate of intensity (depth) compared to the surrounding undetected area in the subject with detected pores, the total area with detected pores, and pore length and pore width ("To find the best set of parameters θ, a loss function is minimized. Experiments were performed with several loss functions," paragraph [0040], and "The skin diagnosis method and techniques herein measure five clinical clusters of the face (wrinkles/texture, sagging, pigmentation disorders, vascular disorders, cheek pores) which facilitate data to describe all impacts of the aging process, environmental conditions (solar exposures, chronic urban pollution exposures, etc.) or lifestyles (stress, tiredness, quality of sleep, smoking, alcohol, etc.)," paragraph [0103]), and outputs analysis results for at least one among the number of pores, intensity (depth) of pores, pore size, pore area, pore length, pore width, pore sagging (length to width ratio), distribution for each intensity, area, length, width or sagging of pores, and pore score ("Thus there is described a deep learning approach to skin diagnostics developed using data of females of different ages and ethnicities including the technical aspects of this approach and the results obtained," paragraph [0030]).
Claim 9

Regarding claim 9, Jiang et al. teach the skin diagnosis method according to claim 1, wherein the artificial neural network model for diagnosing the erythema is learned by adjusting parameters including the total number of a subject with detected erythema, an estimate of intensity compared to the surrounding undetected area in the subject with detected erythema, and the total area with detected erythema, and outputs analysis results for at least one among the number of erythema, intensity of erythema, erythema area, distribution for each intensity or area of erythema, and erythema score ("To find the best set of parameters θ, a loss function is minimized. Experiments were performed with several loss functions," paragraph [0040], and "The skin diagnosis method and techniques herein measure five clinical clusters of the face (wrinkles/texture, sagging, pigmentation disorders, vascular disorders, cheek pores) which facilitate data to describe all impacts of the aging process, environmental conditions (solar exposures, chronic urban pollution exposures, etc.) or lifestyles (stress, tiredness, quality of sleep, smoking, alcohol, etc.)," paragraph [0103], where erythema is a vascular disorder causing redness).

Claim 10

Regarding claim 10, Jiang et al. teach the skin diagnosis method according to claim 1, wherein the artificial neural network model for diagnosing the aging predicts age for facial aging or facial skin aging estimated from the face image by inputting at least one of the de-identified face image, the output result of a single artificial neural network model, and the value that integrates the output result of a plurality of artificial neural network models ("The skin diagnosis method and techniques herein measure five clinical clusters of the face (wrinkles/texture, sagging, pigmentation disorders, vascular disorders, cheek pores) which facilitate data to describe all impacts of the aging process, environmental conditions (solar exposures, chronic urban pollution exposures, etc.) or lifestyles (stress, tiredness, quality of sleep, smoking, alcohol, etc.)," paragraph [0103], where the techniques herein refer to a neural network described in the art).

Claim 11

Regarding claim 11, Jiang et al. teach the skin diagnosis method according to claim 1, wherein each of the plurality of artificial neural network models is an encoder-decoder structural model based on the U-net model ("a large fully convolutional part resulting in a low resolution but powerful set of CNN features (e.g. in an encoder phase), followed by global max or average pooling and several fully connected layers with a final classification layer (in a decoder phase)," paragraph [0034]).

Claim 12

Regarding claim 12, Jiang et al. teach the skin diagnosis method according to claim 1, wherein each of the plurality of artificial neural network models is learned in the form of an ImageNet pre-trained weight based on ResNet ("In particular, the ResNet50 (a 50 layer Residual Network from Microsoft Research Asia as described by K. He, X. Zhang, S. Ren, J. Sun, Deep Residual Learning for Image Recognition, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp.
770-778, incorporated herein in its entirety) and the MobileNet V2 (the second version of the depthwise separable convolutional neural network from Google Inc. as described by M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, L.-C. Chen, Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation, arXiv preprint arXiv:1801.04381, 13 Jan. 2018, incorporated herein in its entirety) architectures may be adapted," paragraph [0033]).

Claim 13

Regarding claim 13, Jiang et al. teach the skin diagnosis method according to claim 1, further comprising the step of evaluating at least one of antioxidant efficacy and whitening efficacy for a specific product based on skin diagnosis results for the above items ("Recommendations in term of cosmetic and/or treatment or prevention products (for example on solar exposures which kind of filters in term of geographical location, some anti-oxidants, desquamation agents, etc.)," paragraph [0104]).

Claim 14

Regarding claim 14, Jiang et al. teach the skin diagnosis method according to claim 1, further comprising the step of, before inputting the de-identified face image into the plurality of artificial neural network models, respectively, obtaining information about the subject's skin concerns and lifestyle through a questionnaire, wherein the result of skin diagnose is the one resulting from the subject's skin concerns and lifestyle ("provide skin diagnostic information and receive product/treatment recommendations responsive to a skin diagnosis and/or other information regarding the user e.g. age, gender, etc.," paragraph [0067], where responsive to other information teaches obtaining the information through a questionnaire).

Claim 15

Regarding claim 15, Jiang et al. teach the skin diagnosis method according to claim 14, further comprising the step of recommending a specific product tailored to the subject's skin concerns and lifestyle or providing beauty eating habit, based on the result of skin diagnose ("The processing unit may further be configured to generate a product recommendation for at least one of the N respective non-disease skin sign diagnoses such as by using a product recommendation component," paragraph [0115]).

Claim 16

Regarding claim 16, Jiang et al. teach a skin diagnosis system based on image analysis using deep learning ("skin diagnostics such as for dermatology and to skin treatment monitoring and more particularly to a system and method for automatic image-based skin diagnostics using deep learning," paragraph [0002]), the system comprising: an imaging unit that acquires a face image by photographing a target skin ("an image acquisition function to receive the image," paragraph [0014]); a face detection model that derives shape or location information of a facial structure by recognizing feature points capable of identifying an individual in the acquired face image ("segmenting the image (or normalized image) for each (or at least one) of the N skin signs, indicating which region(s) of face relates to which skin sign. An extract from the image may be made such as using a bounding box and/or mask to isolate a region for which a skin sign diagnosis was prepared for presentation in a GUI," paragraph [0095]); and a plurality of artificial neural network models for each of at least one item among diagnoses for wrinkles, pigmentation, pores, erythema, and aging ("The skin sign diagnosis for the region may be displayed. Colour may be used to indicate a severity that is proportional to the skin sign diagnosis such as using a scaling factor," paragraph [0095], and "a convolutional neural network (CNN) configured to classify pixels of an image to determine a plurality (N) of respective skin sign diagnoses each of a plurality (N) of respective non-disease skin signs wherein the CNN comprises a deep neural network for image classification configured to generate the N respective non-disease skin sign diagnoses and wherein the CNN is trained using non-disease skin sign data for each of the N respective non-disease skin signs," paragraph [0115]), wherein the plurality of artificial neural network models receive the de-identified face image as input, and visualize and provide skin diagnosis results for items corresponding to the artificial neural network models and symptom locations for each item ("Thus there is described a deep learning approach to skin diagnostics developed using data of females of different ages and ethnicities including the technical aspects of this approach and the results obtained," paragraph [0030]).

Jiang et al. do not explicitly teach de-identifying the face image on the basis of the shape or location information. However, Georgievskaya et al. teach a de-identification model that de-identifies the face image on the basis of the shape or location information of the facial structure such that personal information of an analysis target cannot be identified ("preparing a masked facial image by separating one or more areas comprising skin-pixels and one or more areas comprising pixels related to non-skin information," paragraph [0013], where a masked facial image is de-identified). Jiang et al. and Georgievskaya et al. are combined as per claim 1.
Allowable Subject Matter

Claim 2 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

References Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US Patent Publication 2024/0005432 A1 to Aman et al. discloses governing a person's access to a premise or gathering based at least in part upon an anonymous authenticated health status. The person uses a personal computing device such as a smartphone operating an "honest broker" intermediary app to register one or more personal biometrics that remain private to the app. The app communicates with authenticating health measurement devices to determine health measurements regarding the person. When communicating with a health device during measurement, the app confirms the identity of the person by capturing new biometrics for comparison with the registered biometrics.

Non-Patent Publication "Impact of Deep Learning and Smartphone Technologies in Dermatology: Automated Diagnosis" to Goceri et al. discloses that automated methods can provide objective, early diagnosis and remote monitoring of chronic skin diseases. Moreover, they can be helpful for dermatologists to make decisions. In addition, these systems provide efficiency to reduce the cost and time required for diagnosis.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS whose telephone number is (703)756-4696. The examiner can normally be reached Monday-Friday 8:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ms. Jennifer Mehmood, can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Heath E. Wells/
Examiner, Art Unit 2664
Date: 6 February 2026

Prosecution Timeline

- Apr 11, 2024: Application Filed
- Feb 06, 2026: Non-Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

- Patent 12602755: DEEP LEARNING-BASED HIGH RESOLUTION IMAGE INPAINTING (2y 5m to grant; granted Apr 14, 2026)
- Patent 12597226: METHOD AND SYSTEM FOR AUTOMATED PLANT IMAGE LABELING (2y 5m to grant; granted Apr 07, 2026)
- Patent 12591979: IMAGE GENERATION METHOD AND DEVICE (2y 5m to grant; granted Mar 31, 2026)
- Patent 12588876: TARGET AREA DETERMINATION METHOD AND MEDICAL IMAGING SYSTEM (2y 5m to grant; granted Mar 31, 2026)
- Patent 12586363: GENERATION OF PLURAL IMAGES HAVING M-BIT DEPTH PER PIXEL BY CLIPPING M-BIT SEGMENTS FROM MUTUALLY DIFFERENT POSITIONS IN IMAGE HAVING N-BIT DEPTH PER PIXEL (2y 5m to grant; granted Mar 24, 2026)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

- Expected OA Rounds: 1-2
- Grant Probability: 75%
- With Interview: 93% (+18.1%)
- Median Time to Grant: 3y 5m
- PTA Risk: Low

Based on 77 resolved cases by this examiner. Grant probability derived from career allow rate.
