Prosecution Insights
Last updated: April 19, 2026
Application No. 18/574,255

A METHOD OF DEMONSTRATING AN EFFECT OF A COSMETIC PROCEDURE ON A SUBJECT AS WELL AS A CORRESPONDING ANALYSING APPARATUS

Status: Non-Final OA (§103, §112)
Filed: Dec 26, 2023
Examiner: SOFRONIOU, MICHAEL MARIO
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Symae Technologies Holding B V
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Projected OA Rounds: 1-2
Projected Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 11 (11 currently pending, across all art units)

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 37.8% (-2.2% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 35.1% (-4.9% vs TC avg)
Deltas are vs the Tech Center average estimate • Based on career data from 0 resolved cases

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Typographic Conventions

Throughout this office action, shorthand notation for referencing locations of elements in documents is utilized. The following is a brief summary of the shorthand utilized:
Sec. – is used to denote an associated section with a header in non-patent literature
¶ – is used to denote the number and location of a paragraph
col. – is used to denote a column number
ln. – is used to denote a line; if a line number is not demarcated in a document, the line number will be assumed to start at 1 for each paragraph.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/2 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The disclosure is objected to because of the following informalities:
On pg. 3; ln. 2 – the specification recites “are of the subject”. The examiner believes this was meant to recite “area of the subject”.
On pg. 3; ln. 6 – the specification recites “wherein for each photo a different lighting setting is used.” The examiner believes this was meant to recite “wherein for each photo, a different lighting setting is used”.
On pg. 3; ln. 24 – the specification recites a “colour calue”. The examiner believes this was meant to recite “colour value”.
On pg. 9; ln. 3 – the specification recites “fore head”. The examiner believes this was meant to recite “forehead”.
On pg. 9; ln. 19-20 – the specification recites “the photos are used to curvature information”. The examiner believes this was meant to recite “the photos are used to generate/model/create curvature information”.
Appropriate correction is required. 
Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claims 25, 26 & 29 recite limitations that use words like “means” (or “step”) or similar terms with functional language and invoke 35 U.S.C. § 112(f):
Claim 25 – first recites the limitation, “lighting equipment configured to provide lighting” [ln. 3]
Claim 25 – first recites the limitation, “a processing unit configured to create an image” [ln. 8]
Claim 25 – first recites the limitation, “a demonstrating unit configured to demonstrate the effect” [ln. 14]
Claim 26 – first recites the limitation, “receiving equipment configured to receive the color and shining value” [ln. 2]
Claim 29 – first recites the limitation, “measurement equipment configured to provide a three dimensional measurement” [ln. 2]
Because these claim limitations are being interpreted under 35 U.S.C. § 112(f), or pre-AIA 35 U.S.C. § 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof:
“lighting equipment” (Fig. 1a & c; element 7 [pg. 7; ln. 25-30]) – while lighting equipment is not explicitly recited, there is a recitation of “various light sources 7”, which is described as being an arrangement of LED-based light sources which may be used in tandem with reflectors 9, and mirrors 11.
“processing unit” (Fig. 2 & 3; element 29 [pg. 9; 7-13 | pg. 10; 31-33 | pg. 11; 1-12]) – the processing unit is described as being arranged for creating an image having three-dimensional based meta data, which is further defined as processor 29 which communicates with the camera of the tablet computer 21 to obtain an image of the face in the correct orientation.
“demonstrating unit” (Fig. 1 & 2; elements 35, 37, and 39 [pg. 10; ln. 
27-30]) – while a demonstrating unit is not explicitly recited, displays 35, 37, and 39 are recited to provide the image with 3D meta data by differentiating in light provided by the virtual environment.
“receiving equipment” – the specification lacks sufficient disclosure of the associated structure.
“measurement equipment” – the specification lacks sufficient disclosure of the associated structure.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 26 & 29 are rejected under 35 U.S.C. § 112(b) or 35 U.S.C. § 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. § 112, the applicant), regards as the invention. Regarding claim 26, applicant claims “receiving equipment configured to receive the color and shining value of the skin of the subject”. The limitation “receiving equipment” invokes 35 U.S.C. § 112(f) or pre-AIA 35 U.S.C. § 112, sixth paragraph; however, the written description fails to disclose the corresponding structure, materials, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. From the specification, the applicant describes an analyzing apparatus that may comprise a receiving unit [pg. 4; ln. 
6-7]; however, the applicant fails to disclose a demonstrable structure associated with the aforementioned receiving equipment / receiving unit, with no reference to a corresponding element in the drawings or a recitation of a particular feature that provides the recited function. Therefore, in this instance “receiving equipment” is interpreted as a 112(f) limitation, and the specification fails to disclose a particular structure for the aforementioned “receiving equipment”. Regarding claim 29, applicant claims “measurement equipment configured to provide a three dimensional measurement of the subject using a three dimensional measurement system”. The limitation “measurement equipment” invokes 35 U.S.C. § 112(f) or pre-AIA 35 U.S.C. § 112, sixth paragraph; however, the written description fails to disclose the corresponding structure, materials, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. From the specification, the applicant describes equipment arranged for providing a three dimensional measurement of a subject using a three dimensional measurement system [pg. 6; ln. 7-8]; however, the applicant fails to disclose a demonstrable structure associated with the aforementioned measurement equipment, with no reference to a corresponding element in the drawings or a recitation of a particular feature that provides the recited function. Therefore, in this instance “measurement equipment” is interpreted as a 112(f) limitation, and the specification fails to disclose a particular structure for the aforementioned “measurement equipment”. Therefore, claims 26 & 29 are rendered indefinite and are rejected under 35 U.S.C. § 112(b) or pre-AIA 35 U.S.C. § 112, second paragraph. Applicant may: (a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph; (b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)). If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 112(a)

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention. 
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention. Claims 26 & 29 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Regarding claim 26, as per MPEP § 2181(IV), “A means- (or step-) plus function limitation that is found to be indefinite under 35 U.S.C. § 112(b) based on failure of the specification to disclose a corresponding structure, material, or act that performs the entire claimed function also lacks adequate written description” (emphasis added). Furthermore, as per MPEP 2163.03(VI), “such a limitation also lacks an adequate written description as required by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, because an indefinite, unbounded functional limitation would cover all ways of performing a function and indicate that the inventor has not provided sufficient disclosure to show possession of the invention.” Therefore, since the applicant has not defined any particular structure for the “receiving equipment” in claim 26, the inventor has not provided sufficient disclosure to show possession of the invention. 
Applicant has not provided any specific definition for the structure that carries out the function disclosed in claim 26. Additionally, the claimed invention as a whole may not be adequately described if the claims require an essential or critical feature which is not adequately described in the specification and which is not conventional in the art or known to one of ordinary skill in the art. It appears that these components and/or features are essential and critical features of the applicant’s invention, because without them, the applicant’s invention would not function as described. Therefore, since applicant has not adequately described a particular structure for performing each of the functions, a person skilled in the art at the time the invention was filed would not have recognized that the inventor was in possession of the invention as claimed. Similarly, with respect to claim 29, which recites “measurement equipment”, the applicant has not described any particular structure for carrying out the function of the claimed invention in the specification. The claimed invention as a whole may not be adequately described if the claims require an essential or critical feature which is not adequately described in the specification and which is not conventional in the art or known to one of ordinary skill. Therefore, since it is unclear what particular structure is used in carrying out the function of “measurement equipment” recited in claim 29, a person skilled in the art at the time the invention was filed would not have recognized that the inventor was in possession of the invention as claimed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 17-23, 25-31 are rejected under 35 U.S.C. 103 as being unpatentable over Sweis et al (US 2023/0200907 A1) in view of Chhibber et al (US 2013/0076932 A1). Regarding claim 17, Sweis et al disclose a system for assessing aesthetic outcomes after cosmetic surgery procedures. More specifically, Sweis et al teach A method of demonstrating an effect of a cosmetic procedure on a subject (“after” images of a patient’s face are obtained upon completion of a treatment or procedure and are displayed to the patient [¶0076]), by using an analyzing apparatus (computing platform 100 [¶0061; Fig. 
1]), the method comprising the steps of: taking at least three photos of the subject after the cosmetic procedure has been performed on the subject (the system may prompt the practitioner to obtain images of the full face and neck in three views (frontal, 45°, and 90° angles) [¶0103], which can be taken either immediately after the procedure or after an appropriate recovery period, and can be stored in tandem with an associated “before” picture, such as “before” image 702 and “after” image 704 [¶0085; Fig. 7]), and demonstrating, by the analyzing apparatus (computing platform 100 [¶0061; Fig. 1]), the effect of the cosmetic procedure (“after” images of a patient’s face are obtained upon completion of a treatment or procedure and are displayed to the patient [¶0076]) but does not explicitly teach providing a different lighting to the subject for each photo, creating three-dimensional metadata based on a plurality of normals, implementing a model of the skin, or differentiating in light provided in the virtual environment. Chhibber et al, however, is analogous art in the same field of endeavor as the present application and discloses a method for generating a three-dimensional surface skin profile of a subject’s face. More specifically, Chhibber et al teach providing lighting to the subject using a lighting setting (Chhibber et al: a subject is illuminated via a plurality of light sources [¶ 0058-62; Figs. 3A & B]); wherein for each of the at least three photos a different lighting setting is used (Chhibber et al: each light source 208 (comprised of individual light sources 208-1 & 208-2) is located to illuminate the subject 202 from a distinct location [¶0042; Fig. 
2A], with at least one of the light sources configured to emit light of a respective color via an LED [¶0043]); creating, by the analyzing apparatus, an image having three dimensional based meta data (Chhibber et al: camera 204 obtains information about the surfaces of a face that may be used for the construction of normal maps, which are used for rendering three-dimensional models [¶0053 & 0110; Fig. 6B]), calculating a plurality of normals of the subject using the at least three photos (Chhibber et al: incoming ray of light 114 in Fig. 1B is illustrated to reflect off the skin as specularly reflected light 116, whose angle of reflection corresponds to the normal of the skin 103 [¶0027]), thereby obtaining curvature information of the subject (Chhibber et al: incoming ray of light 114 in Fig. 1B is illustrated to reflect off the skin as specularly reflected light 116, whose angle of reflection corresponds to the normal of the skin 103, which when measured, imparts information about the curvature of the skin (in this case, wrinkles or bumps) [¶0027]); and implementing a skin model, wherein the skin model is based on a color and a shining value of a skin of the subject (Chhibber et al: the examiner notes that a “shining value” is being interpreted as the reflectance of light or albedo of the skin: three-dimensional surface profile of the skin 380 is generated corresponding to the subject’s skin surface profile 370 [¶0068; Fig. 3E], color images may be used to detect and classify skin tone / color [¶0115], a ray of light 114 is reflected off the skin 103 as specularly reflected light 116 (which is interpreted to impart a characteristic “shine” of the skin) [¶0027; Fig. 
1B], and this specularly reflected light is used to determine a skin surface profile of the subject [¶0095-96]); by providing the image having the three dimensional based meta data in a virtual environment (Chhibber et al: camera 204 obtains information about the surfaces of a face that may be used for the construction of normal maps, which are used for rendering three-dimensional models [¶0053 & 0110; Fig. 6B]), and by differentiating in light provided in the virtual environment to the image having the three dimensional based meta data (Chhibber et al: for each region the skin surface profile is segmented into, an angle and intensity of light reflected from each surface by a plurality of light sources is calculated through steps 614-620 [¶0101-104; Fig. 6B]). Chhibber et al disclose that their system provides the added advantage of increased speed, quality and accuracy of generating skin surface profiles via illuminating the skin via a plurality of light sources [¶ 0007]. Chhibber et al further disclose that this system enables accurate detection and measurement of imaged skin features such as pore size, skin spots, wrinkles, or moles via their 3D rendering [¶0046 & 53], features that are often aimed to be changed or minimized via cosmetic procedures. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the features disclosed by Chhibber et al for creating a surface model of the skin and implement them with the base device proposed by Sweis et al for demonstrating the effects of a cosmetic procedure to arrive at the invention of the present application. Regarding claim 18, Sweis et al in view of Chhibber et al teach The method according to claim 17 (as described above), the method further comprising the step of: receiving, by the analyzing apparatus (Chhibber et al: imaging system 300 [¶0056-57; Fig. 
2C]), the color and shining value of the skin of the subject (Chhibber et al: three-dimensional surface profile of the skin 380 is generated corresponding to the subject’s skin surface profile 370 [¶0068; Fig. 3E], color images may be used to detect and classify skin tone / color [¶0115], a ray of light 114 is reflected off the skin 103 as specularly reflected light 116 (which is interpreted to impart a characteristic “shine” of the skin) [¶0027; Fig. 1B], and this specularly reflected light is used to determine a skin surface profile of the subject [¶0095-96]). Chhibber et al disclose that their system provides the added advantage of increased speed, quality and accuracy of generating skin surface profiles via illuminating the skin via a plurality of light sources [¶ 0007]. Chhibber et al further disclose that this system enables accurate detection and measurement of imaged skin features such as pore size, skin spots, wrinkles, or moles via their 3D rendering [¶0046 & 53], features that are often aimed to be changed or minimized via cosmetic procedures. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the features disclosed by Chhibber et al for creating a surface model of the skin and implement them with the base device proposed by Sweis et al for demonstrating the effects of a cosmetic procedure to arrive at the invention of the present application. Regarding claim 19, Sweis et al in view of Chhibber et al teach The method according to claim 17 (as previously described), wherein the subject is a face of a human being (Sweis et al: a cartoon of a (human) patient’s face is shown as element 126a [¶0086; Fig. 2]). 
Regarding claim 20, Sweis et al in view of Chhibber et al teach The method according to claim 17 (as previously described), wherein the step of taking at least three photos comprises taking at least six photos of the subject (Sweis et al: the system may prompt the practitioner to obtain images of (i) the full face and neck in repose in three views (frontal, 45°, and 90° angles) and (ii) the full face and neck while smiling in three views (frontal, 45°, and 90° angles) [¶0103]). Regarding claim 21, Sweis et al in view of Chhibber et al teach The method according to claim 17 (as previously described), the method further comprising the step of: providing a three dimensional measurement of the subject using a three dimensional measurement system (Chhibber et al: surface profiles 360 (Fig. 3D) and 380 (Fig. 3E) can be analyzed for feature measurements of pore size, wrinkle length and depth, number, density, etc. [¶0113]), and wherein the three dimensional based meta data that is created in the step of creating, is further determined based on the provided three dimensional measurement (Chhibber et al: surface profiles 360 (Fig. 3D) and 380 (Fig. 3E) can be analyzed for feature measurements of pore size, wrinkle length and depth, number, density, etc. [¶0113]). Chhibber et al disclose that their system provides the added advantage of increased speed, quality and accuracy of generating skin surface profiles via illuminating the skin via a plurality of light sources [¶ 0007]. Chhibber et al further disclose that this system enables accurate detection and measurement of imaged skin features such as pore size, skin spots, wrinkles, or moles via their 3D rendering [¶0046 & 53], features that are often aimed to be changed or minimized via cosmetic procedures. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the features disclosed by Chhibber et al for creating a surface model of the skin and implement them with the base device proposed by Sweis et al for demonstrating the effects of a cosmetic procedure to arrive at the invention of the present application. Regarding claim 22, Sweis et al in view of Chhibber et al teach The method according to claim 17 (as previously described), wherein the different lighting setting comprises at least one of: a location of a lighting source (Chhibber et al: each light source 208 is located to illuminate the subject 202 from a distinct location [¶0042; Fig. 2A]); a light direction of the lighting source (Chhibber et al: the surface normal 111 of skin 103 is identified based on a predefined direction of incoming light [¶0027; Fig. 1B]); a number of simultaneously used lighting sources (Chhibber et al: each light source 208 (comprised of individual light sources 208-1 & 208-2) is located to illuminate the subject 202 from a distinct location [¶0042; Fig. 2A], with each ray of light (114 & 118) from each light source reflecting off the skin 103 and being captured by camera 106 [¶0027; Fig. 1B]); and a color of the lighting source (Chhibber et al: light sources 208 include a first light source 208-1 of a first color, and a second light source 208-2 of a second color [¶0043; Fig. 2A-C]). Chhibber et al disclose that their system provides the added advantage of increased speed, quality and accuracy of generating skin surface profiles via illuminating the skin via a plurality of light sources [¶ 0007]. Chhibber et al further disclose that this system enables accurate detection and measurement of imaged skin features such as pore size, skin spots, wrinkles, or moles via their 3D rendering [¶0046 & 53], features that are often aimed to be changed or minimized via cosmetic procedures. 
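For context on the technique these rejections map to, recovering a "plurality of normals" from at least three photos taken under different lighting settings resembles classic photometric stereo. The sketch below is a minimal illustration of that general technique under a Lambertian reflectance assumption; the function name, the synthetic three-light setup, and all values are illustrative assumptions, not code or data from the application or from Sweis or Chhibber.

```python
# Photometric stereo sketch: K >= 3 grayscale photos of a static subject,
# each lit from a known, distinct direction, let us solve per pixel for
# G = rho * N (albedo-scaled normal) from the Lambertian model I = rho * (N . L).
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: list of K (H, W) grayscale arrays; light_dirs: (K, 3) unit vectors."""
    I = np.stack([img.reshape(-1) for img in images])   # (K, H*W) intensities
    L = np.asarray(light_dirs, dtype=float)             # (K, 3) light directions
    # Least-squares solve L @ G = I for every pixel at once.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)           # (3, H*W)
    rho = np.linalg.norm(G, axis=0)                     # per-pixel albedo
    N = G / np.maximum(rho, 1e-12)                      # unit surface normals
    h, w = images[0].shape
    return N.T.reshape(h, w, 3), rho.reshape(h, w)

# Tiny synthetic check: a flat patch facing the camera (true normal = +z).
lights = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8], [0.0, 0.6, 0.8]])
true_n = np.array([0.0, 0.0, 1.0])
albedo = 0.7
imgs = [np.full((2, 2), albedo * l @ true_n) for l in lights]
normals, rho = photometric_stereo(imgs, lights)
```

At least three differently lit photos are needed because each pixel has three unknowns (the components of rho * N), which matches the claim's "at least three photos ... wherein for each photo a different lighting setting is used".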
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the features disclosed by Chhibber et al for creating a surface model of the skin and implement them with the base device proposed by Sweis et al for demonstrating the effects of a cosmetic procedure to arrive at the invention of the present application. Regarding claim 23, Sweis et al in view of Chhibber et al teach The method according to claim 17 (as previously described), wherein the step of taking at least three photos of the subject comprises: taking the at least three photos of the subject homogenously, such that an orientation of the subject remains constant over the at least three photos (given the disjunctive “or” of the claim language, this limitation will not be mapped to); or taking the at least three photos of the subject heterogeneously, such that an orientation of the subject varies over the at least three photos (Sweis et al: the system may prompt the practitioner to obtain images of the full face and neck in three views (frontal, 45°, and 90° angles) [¶0103]). Regarding claim 25, Sweis et al disclose a system for assessing aesthetic outcomes after cosmetic surgery procedures. More specifically, Sweis et al teach An analyzing apparatus (computing platform 100 [¶0061; Fig. 
1]) for demonstrating an effect of a cosmetic procedure on a subject (“after” images of a patient’s face are obtained upon completion of a treatment or procedure and are displayed to the patient [¶0076]), the analyzing apparatus comprising: a camera unit (an imaging sensor such as a camera communicatively coupled to the system 100 [¶0103]) configured to take at least three photos of the subject after the cosmetic procedure has been performed on the subject (the system may prompt the practitioner to obtain images of the full face and neck in three views (frontal, 45°, and 90° angles) [¶0103], which can be taken either immediately after the procedure or after an appropriate recovery period, and can be stored in tandem with an associated “before” picture, such as “before” image 702 and “after” image 704 [¶0085; Fig. 7]), and a demonstrating unit (As noted above in Claim Interpretation, functional claim language that invokes 35 U.S.C. § 112(f) is being mapped to the corresponding structure outlined in the specification of the present application – user interface of a display of system 100 or device 154 displays a prompt for an image of the patient’s face [¶0103]) configured to demonstrate the effect of the cosmetic procedure (“after” images of a patient’s face are obtained upon completion of a treatment or procedure and are displayed to the patient [¶0076]), but does not explicitly teach providing a different lighting to the subject for each photo, creating three-dimensional metadata based on a plurality of normals, implementing a model of the skin, or differentiating in light provided in the virtual environment. Chhibber et al, however, is analogous art in the same field of endeavor as the present application and discloses a method for generating a three-dimensional surface skin profile of a subject’s face. More specifically, Chhibber et al teach lighting equipment (Chhibber et al: light source 160 can be a polarized LED [¶0029; Fig. 
1D]) configured to provide lighting to the subject using a lighting setting (Chhibber et al: a subject is illuminated via a plurality of light sources [¶ 0058-62; Figs. 3A & B]); wherein for each of the at least three photos a different lighting setting is used (Chhibber et al: each light source 208 (comprised of individual light sources 208-1 & 208-2) is located to illuminate the subject 202 from a distinct location [¶0042; Fig. 2A]); a processing unit (Chhibber et al: processor(s) 230 [¶0054; Fig. 2A]) configured to create an image having three dimensional based meta data (Chhibber et al: camera 204 obtains information about the surfaces of a face that may be used for the construction of normal maps, which are used for rendering three-dimensional models [¶0053 & 0110; Fig. 6B]), wherein the three dimensional based meta data is determined by: calculating a plurality of normals of the subject using the at least three photos (Chhibber et al: incoming ray of light 114 in Fig. 1B is illustrated to reflect off the skin as specularly reflected light 116, whose angle of reflection corresponds to the normal of the skin 103 [¶0027]), thereby obtaining curvature information of the subject (Chhibber et al: incoming ray of light 114 in Fig. 1B is illustrated to reflect off the skin as specularly reflected light 116, whose angle of reflection corresponds to the normal of the skin 103, which when measured, imparts information about the curvature of the skin (in this case, wrinkles or bumps) [¶0027]); and implementing a skin model, wherein the skin model is based on a color and a shining value of a skin of the subject (Chhibber et al: three-dimensional surface profile of the skin 380 is generated corresponding to the subject’s skin surface profile 370 [¶0068; Fig.
3E], color images may be used to detect and classify skin tone / color [¶0115], a ray of light 114 is reflected off the skin 103 as specularly reflected light 116 (which is interpreted to impart a characteristic “shine” of the skin) [¶0027; Fig. 1B], and this specularly reflected light is used to determine a skin surface profile of the subject [¶0095-96]); by providing the image having the three dimensional based meta data in a virtual environment (Chhibber et al: camera 204 obtains information about the surfaces of a face that may be used for the construction of normal maps, which are used for rendering three-dimensional models [¶0053 & 0110; Fig. 6B]), and by differentiating in light provided in the virtual environment to the image having the three dimensional based meta data (Chhibber et al: for each region the skin surface profile is segmented into, an angle and intensity of light reflected from each surface by a plurality of light sources is calculated through steps 614-620 [¶0101-104; Fig. 6B]). Chhibber et al disclose that their system provides the added advantage of increased speed, quality and accuracy of generating skin surface profiles via illuminating the skin via a plurality of light sources [¶ 0007]. Chhibber et al further disclose that this system enables accurate detection and measurement of imaged skin features such as pore size, skin spots, wrinkles, or moles via their 3D rendering [¶0046 & 53], features that are often aimed to be changed or minimized via cosmetic procedures. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the features disclosed by Chhibber et al for creating a surface model of the skin and implement them with the base device proposed by Sweis et al for demonstrating the effects of a cosmetic procedure to arrive at the invention of the present application.
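Editor's note: the mapped limitation — computing per-pixel surface normals from at least three photos, each taken under a different known lighting setting, then relighting the result in a virtual environment — is, in essence, classic photometric stereo. The sketch below is a minimal illustration of that general technique under a Lambertian reflectance assumption; it is not drawn from the cited references, and all function names are hypothetical.

```python
import numpy as np

def surface_normals(I, L):
    """Recover per-pixel unit normals and albedo from k >= 3 photos.

    I: (k, n) array of pixel intensities, one row per photo.
    L: (k, 3) array of unit light-direction vectors, one row per photo.
    """
    # Lambertian model: I = L @ (albedo * normal). With >= 3 non-coplanar
    # light directions, solve the per-pixel 3-unknown system by least squares.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)        # G: (3, n)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)          # unit normal per pixel
    return normals, albedo

def relight(normals, albedo, light):
    """Render the recovered surface under a new (virtual) light direction."""
    light = np.asarray(light, dtype=float)
    light = light / np.linalg.norm(light)
    # Clamp back-facing pixels to zero (no negative brightness).
    return albedo * np.maximum(light @ normals, 0.0)
```

The requirement of at least three non-coplanar light directions is what makes the per-pixel 3×3 system solvable, which is consistent with the claims reciting “at least three photos” with a different lighting setting for each.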
Regarding claim 26, Sweis et al in view of Chhibber et al teach The analyzing apparatus according to claim 25 (as described above), further comprising: receiving equipment (Chhibber et al: camera 106 or 204 [¶0025 & 0039; Fig. 1A & 2A]) configured to receive the color and shining value of the skin of the subject (Chhibber et al: three-dimensional surface profile of the skin 380 is generated corresponding to the subject’s skin surface profile 370 [¶0068; Fig. 3E], color images may be used to detect and classify skin tone / color [¶0115], a ray of light 114 is reflected off the skin 103 as specularly reflected light 116 (which is interpreted to impart a characteristic “shine” of the skin) [¶0027; Fig. 1B], and this specularly reflected light is used to determine a skin surface profile of the subject [¶0095-96]). Chhibber et al disclose that their system provides the added advantage of increased speed, quality and accuracy of generating skin surface profiles via illuminating the skin via a plurality of light sources [¶ 0007]. Chhibber et al further disclose that this system enables accurate detection and measurement of imaged skin features such as pore size, skin spots, wrinkles, or moles via their 3D rendering [¶0046 & 53], features that are often aimed to be changed or minimized via cosmetic procedures. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the features disclosed by Chhibber et al for creating a surface model of the skin and implement them with the base device proposed by Sweis et al for demonstrating the effects of a cosmetic procedure to arrive at the invention of the present application. Regarding claim 27, Sweis et al in view of Chhibber et al teach The analyzing apparatus according to claim 25 (as described previously), wherein the subject is a face of a human being (Sweis et al: a cartoon of a (human) patient’s face is shown as element 126a [¶0086; Fig.
2]). Regarding claim 28, Sweis et al in view of Chhibber et al teach The analyzing apparatus according to claim 25 (as previously described), wherein the camera unit (Sweis et al: an imaging sensor such as a camera communicatively coupled to the system 100 [¶0103]) is further configured to take at least six photos of the subject (Sweis et al: the system may prompt the practitioner to obtain images of (i) the full face and neck in repose in three views (frontal, 45°, and 90° angles) and (ii) the full face and neck while smiling in three views (frontal, 45°, and 90° angles) [¶0103]). Regarding claim 29, Sweis et al in view of Chhibber et al teach The analyzing apparatus according to claim 28 (as described above), further comprising: measurement equipment (Chhibber et al: camera 204 including a lens 218 to focus light onto a photodetector 216, with the lens 218 being controlled by control circuitry 214 to enable zooming for accurate measurement of imaged skin features [¶0046; Fig. 2A]) configured to provide a three dimensional measurement of the subject using a three dimensional measurement system (Chhibber et al: surface profiles 360 (Fig. 3D) and 380 (Fig. 3E) can be analyzed for feature measurements of pore size, wrinkle length and depth, number, density, etc. [¶0113]), wherein the three dimensional based meta data that is created by the processing unit, is further determined based on the three dimensional measurement (Chhibber et al: surface profiles 360 (Fig. 3D) and 380 (Fig. 3E) can be analyzed for feature measurements of pore size, wrinkle length and depth, number, density, etc. [¶0113]). Chhibber et al disclose that their system provides the added advantage of increased speed, quality and accuracy of generating skin surface profiles via illuminating the skin via a plurality of light sources [¶ 0007].
Chhibber et al further disclose that this system enables accurate detection and measurement of imaged skin features such as pore size, skin spots, wrinkles, or moles via their 3D rendering [¶0046 & 53], features that are often aimed to be changed or minimized via cosmetic procedures. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the features disclosed by Chhibber et al for creating a surface model of the skin and implement them with the base device proposed by Sweis et al for demonstrating the effects of a cosmetic procedure to arrive at the invention of the present application. Regarding claim 30, Sweis et al in view of Chhibber et al teach The analyzing apparatus according to claim 25 (as previously described), wherein the different lighting setting comprises at least one of: a location of a lighting source (Chhibber et al: each light source 208 is located to illuminate the subject 202 from a distinct location [¶0042; Fig. 2A]); a light direction of the lighting source (Chhibber et al: the surface normal 111 of skin 103 is identified based on a predefined direction of incoming light [¶0027; Fig. 1B]); a number of simultaneously used lighting sources (Chhibber et al: each light source 208 (comprised of individual light sources 208-1 & 208-2) is located to illuminate the subject 202 from a distinct location [¶0042; Fig. 2A], with each ray of light (114 & 118) from each light source reflecting off the skin 103 and being captured by camera 106 [¶0027; Fig. 1B]); and a color of the lighting source (Chhibber et al: light sources 208 include a first light source 208-1 of a first color, and a second light source 208-2 of a second color [¶0043; Fig. 2A-C]). Chhibber et al disclose that their system provides the added advantage of increased speed, quality and accuracy of generating skin surface profiles via illuminating the skin via a plurality of light sources [¶ 0007].
Chhibber et al further disclose that this system enables accurate detection and measurement of imaged skin features such as pore size, skin spots, wrinkles, or moles via their 3D rendering [¶0046 & 53], features that are often aimed to be changed or minimized via cosmetic procedures. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the features disclosed by Chhibber et al for creating a surface model of the skin and implement them with the base device proposed by Sweis et al for demonstrating the effects of a cosmetic procedure to arrive at the invention of the present application. Regarding claim 31, Sweis et al in view of Chhibber et al teach The analyzing apparatus according to claim 25 (as previously described), wherein the camera unit (Sweis et al: an imaging sensor such as a camera communicatively coupled to the system 100 [¶0103]) is further configured to: take the at least three photos of the subject homogenously, such that an orientation of the subject remains constant over the at least three photos (given the disjunctive “or” of the claim language, this limitation will not be mapped to); or take the at least three photos of the subject heterogeneously, such that an orientation of the subject varies over the at least three photos (Sweis et al: the system may prompt the practitioner to obtain images of the full face and neck in three views (frontal, 45°, and 90° angles) [¶0103]). Claims 24 & 32 are rejected under 35 U.S.C. 103 as being unpatentable over Sweis et al (US 2023/0200907 A1) in view of Chhibber et al (US 2013/0076932 A1), further in view of Weyrich et al (US 2006/0227137). Regarding claim 24, Sweis et al in view of Chhibber et al teach The method according to claim 17 (as previously described), however the aforementioned references fail to teach receiving translucency information of the skin.
Weyrich et al, on the other hand, is analogous art pertinent to the technological problem addressed in the present application and discloses a model for simulating skin reflectance for rendering subjects’ faces. More specifically, Weyrich et al teach the method further comprising the step of: receiving, by the analyzing apparatus (Weyrich et al: fiber optic spectrometer 500 is used to obtain reflectance data of the skin, captured via camera 540 [¶0037; Fig. 5]), translucency information of the skin of the subject (Weyrich et al: using dense interpolation 220, a translucency map 234 is generated [¶0037; Fig. 2]), and wherein the three dimensional based meta data that is created in the step of creating, is further determined based on the received translucency information (Weyrich et al: a 3D geometry of the face is measured via a 3D scanner, which utilizes the translucency map 234, in addition to a basis BRDF 231, texture map 232, and albedo map 233, to provide a complete skin reflectance model 250 [¶0036-39; Fig. 2]). Weyrich et al also disclose that the translucency map 234 expresses low-frequency absorption and scattering in the dermal layer, while the albedo map 233 expresses high-frequency color variations due to epidermal absorption and scattering [¶0033], which provides the benefit of capturing illumination dynamics at both ends of the frequency spectrum. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to incorporate the translucency map provided by Weyrich et al with the skin surface model disclosed by Sweis et al in view of Chhibber et al to generate a more complete model of the skin reflectance to arrive at the invention of the present application.
Regarding claim 32, Sweis et al in view of Chhibber et al teach The analyzing apparatus according to claim 25 (as previously described), however the aforementioned references fail to teach receiving translucency information of the skin. Weyrich et al, on the other hand, is analogous art pertinent to the technological problem addressed in the present application and discloses a model for simulating skin reflectance for rendering subjects’ faces. More specifically, Weyrich et al teach the analyzing apparatus further comprising: receiving equipment (Weyrich et al: face-scanner 350, which contains four cameras [¶0048; Fig. 3]) configured to receive translucency information of the skin of the subject (Weyrich et al: fiber optic spectrometer 500 is used to obtain reflectance data of the skin, captured via camera 540 using dense interpolation 220, a translucency map 234 is generated [¶0037; Figs. 2 & 5]), wherein the three dimensional based meta data that is created by the processing unit (Chhibber et al: processor(s) 230 [¶0054; Fig. 2A]), is further determined based on the received translucency information (Weyrich et al: a 3D geometry of the face is measured via a 3D scanner, which utilizes the translucency map 234, in addition to a basis BRDF 231, texture map 232, and albedo map 233, to provide a complete skin reflectance model 250 [¶0036-39; Fig. 2]). Weyrich et al also disclose that the translucency map 234 expresses low-frequency absorption and scattering in the dermal layer, while the albedo map 233 expresses high-frequency color variations due to epidermal absorption and scattering [¶0033], which provides the benefit of capturing illumination dynamics at both ends of the frequency spectrum.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to incorporate the translucency map provided by Weyrich et al with the skin surface model disclosed by Sweis et al in view of Chhibber et al to generate a more complete model of the skin reflectance to arrive at the invention of the present application. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: D’Alessandro et al (US 2018/0276883 A1) disclose a method for simulating age and appearance by constructing a 3D model of a human’s face. Chen et al (WO 2017/029488 A2) disclose a method for generating 3D models of a human’s face under distinct illumination conditions. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael M. Sofroniou whose telephone number is (571)272-0287. The examiner can normally be reached M-F: 8:30 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John M. Villecco can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL M SOFRONIOU/Examiner, Art Unit 2661 /JOHN VILLECCO/Supervisory Patent Examiner, Art Unit 2661

Prosecution Timeline

Dec 26, 2023
Application Filed
Mar 05, 2026
Non-Final Rejection — §103, §112 (current)


Prosecution Projections

1-2
Expected OA Rounds
Grant Probability
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
