DETAILED ACTION
This Office action is in response to the amendment and remarks filed on 10/16/2025.
Claims 1, 3, 7, and 9-10 are currently pending and have been examined.
Claims 2, 4-6, 8, and 11-16 are cancelled.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. CN 2022116903239, filed on 12/27/2022.
Response to Arguments
Applicant’s arguments, see page 2, filed 10/16/2025, with respect to the 35 U.S.C. 112(b) rejection of claims 1-2 and 4-16, the 35 U.S.C. 101 (computer program per se) rejection of claim 8, and the 35 U.S.C. 101 (signals per se) rejection of claim 10 have been fully considered and are persuasive. Those rejections have been withdrawn. However, a new 35 U.S.C. 112(b) rejection of claim 3 is set forth below.
Applicant's arguments filed 10/16/2025 have been fully considered but they are not persuasive.
With respect to applicant’s arguments on page 3 of the remarks filed 10/16/2025 that the claims are directed to an improvement to automated recommendation technology because the claims recite specific computer-based operations for obtaining user data from social networks and for developing and applying algorithms and models, Examiner respectfully disagrees.
In addition, a specific way of achieving a result is not a stand-alone consideration in Step 2A Prong Two. However, the specificity of the claim limitations is relevant to the evaluation of several considerations including the use of a particular machine, particular transformation and whether the limitations are mere instructions to apply an exception. See MPEP 2106.04(d)(I).
The courts have also identified limitations that did not integrate a judicial exception into a practical application: merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f) and 2106.04(d)(I).
If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. An indication that the claimed invention provides an improvement can include a discussion in the specification that identifies a technical problem and explains the details of an unconventional technical solution expressed in the claim, or identifies technical improvements realized by the claim over the prior art. See MPEP § 2106.05(a)(II).
To show that the involvement of a computer assists in improving the technology, the claims must recite the details regarding how a computer aids the method, the extent to which the computer aids the method, or the significance of a computer to the performance of the method. Merely adding generic computer components to perform the method is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology. See MPEP § 2106.05(f) and 2106.05(a)(II).
A specific way of achieving a result (here, obtaining data and developing and applying mathematical algorithms and models) is not a stand-alone consideration in Step 2A Prong Two. Claim 1 merely recites a computer-implemented method and merely includes instructions to implement an abstract idea on a computer.
The disclosure does not provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement to technology; the computer is merely used to obtain data and to develop and apply algorithms and models. Applicant’s specification, in paragraphs [0004] and [0007], discusses improving the accuracy of the recommendation, which is directed to solving a commercial problem regarding item recommendations rather than a problem rooted in technology. Therefore, the claims do not integrate the abstract idea into a practical application, nor do they recite significantly more, because they use a computer as a tool to obtain data and to develop and apply mathematical models and algorithms.
With respect to applicant’s arguments on pages 3-5 of the remarks filed 10/16/2025 that Agrawal does not teach that user personality could or should be inferred through linguistic emotional analysis or that fuzzy semantic scoring could be used to match personality with clothing design style, Examiner respectfully disagrees.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., that user personality could or should be inferred through linguistic emotional analysis or that fuzzy semantic scoring could be used to match personality with clothing design style) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
With respect to applicant’s arguments on pages 3-5 of the remarks filed 10/16/2025 that Zadeh does not teach derivation of fashion personality language features from user-authored texts, computation of correlation coefficients between linguistic and psychological traits, and quantifying clothing design semantics using expert-based fuzzy scales, Examiner respectfully disagrees.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., derivation of fashion personality language features from user-authored texts, computation of correlation coefficients between linguistic and psychological traits, and quantifying clothing design semantics using expert-based fuzzy scales) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
With respect to applicant’s arguments on pages 5-6 of the remarks filed 10/16/2025 that Agrawal and Zadeh do not teach that a machine-learning personality relationship model and a fuzzy quantification design model are fused to produce autonomous fashion recommendations, Examiner respectfully disagrees.
Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
With respect to applicant’s arguments on page 6 of the remarks filed 10/16/2025 that Agrawal and Zadeh do not teach predicting fashion personality preferences, latent personality inference, and semantic design quantification, Examiner respectfully disagrees.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., predicting fashion personality preferences, latent personality inference, and semantic design quantification) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
With respect to applicant’s arguments on pages 6-7 of the remarks filed 10/16/2025 that dependent claims 3 and 7, as well as independent claims 9 and 10, are not taught by the prior art, Examiner respectfully disagrees.
Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 3 recites: “selecting, through significant difference analysis and reliability and validity testing, items that have discriminatory power for personality identification to form a fashion personality test scale,” rendering said claim indefinite because it is unclear whether the first recitation of “a fashion personality test scale” in claim 1 is the same as or different from the subsequent recitation of “a fashion personality test scale” in claim 3. Appropriate correction or clarification is required.
The terms “significant difference analysis” and “discriminatory power” in claim 3 are relative terms which render the claim indefinite. The terms “significant difference analysis” and “discriminatory power” are not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. How are “significant difference analysis” and “discriminatory power” measured? What are “significant difference analysis” and “discriminatory power”? What is their scope? Appropriate correction or clarification is required.
Claim 10 recites: “the fashion personality prediction and clothing recommendation method of claim 1”; however, claim 1 recites: “[t]he computer-implemented autonomous fashion personality prediction and clothing recommendation method,” rendering said claim indefinite because it is unclear whether claim 10 is referring back to the same method recited in claim 1 or to a different method. Appropriate correction or clarification is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3, 7, and 10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more.
Under Step 1 of the Subject Matter Eligibility Test, it must be considered whether the claims are directed to one of the four statutory categories of invention. See MPEP § 2106. In the instant case, claims 1, 3, and 7 are directed to a method, claim 9 is directed to an electronic device, and claim 10 is directed to a non-transitory computer-readable storage medium, each of which falls within one of the four statutory categories of invention (process, machine, or article of manufacture). Accordingly, the claims will be further analyzed under revised Step 2:
Under Step 2A (Prong One) of the Subject Matter Eligibility Test, it must be considered whether the claims recite a judicial exception; if so, it must then be determined in Prong Two whether the recited judicial exception is integrated into a practical application of that exception. If the claim recites a judicial exception (i.e., an abstract idea), the claim requires further analysis in Prong Two. One of the enumerated groupings of abstract ideas is certain methods of organizing human activity, which includes fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). The mathematical concepts grouping is defined as mathematical relationships, mathematical formulas or equations, and mathematical calculations. See MPEP § 2106.04(a)(2).
Representative independent claim 1 recites the abstract idea of:
obtaining fashion personality language feature data of a user from social networks;
establishing a fashion personality test scale;
obtaining based on the fashion personality test scale, all proportional preferences of fashion personality types of the user;
developing a relationship model between the fashion personality language feature data and the fashion personality types…;
extracting clothing features, screening clothing samples, quantifying design styles of the clothing samples, and then matching the design styles of the clothing samples;
obtaining user clothing style preference values for the clothing samples; and based on correspondence between the fashion personality types and the user clothing style preference values, as well as correspondence between the matched design styles and design elements, establishing a clothing design model by combining the relationship model between the collected user language feature data and the fashion personality types for prediction and recommendation;
wherein obtaining the fashion personality language feature data from the social networks comprises:
collecting different types of social text used by the user from the social networks;
obtaining a set of personality language feature elements (m) by combining number of words used by the user in the different types of social texts and an emotional dictionary;
obtaining correlation coefficients between the personality language feature elements and the fashion personality types, and filtering out personality language feature elements with strong correlation coefficients; and
using the filtered personality language feature elements as the fashion personality language feature data for developing the relationship model;
wherein filtering out the personality language feature elements with strong correlation coefficients comprises reserving personality language feature elements with correlation coefficients greater than a first threshold as a key language feature set for the fashion personality prediction index, expressed as T = {ti, tj, ..., tw} (i < j < w ∈ {1, 2, 3, ..., m}) and w (w < n), where m represents the total number of personality language feature elements and w represents the number of personality language feature elements with strong correlation with the fashion personality types;
wherein developing the relationship model between the user language feature data and the fashion personality types comprises:
receiving the key language feature set Ti = {t1, t2, ..., tw} as input data; and
generating a multidimensional matrix Qi = {q1, q2, ..., qk} as output data, wherein the multidimensional matrix Qi = {q1, q2, ..., qk} corresponds to the proportional preferences of the fashion personality types P = {p1, p2, ..., pk}, respectively; and
wherein quantifying the design styles of the clothing samples and then matching the design styles of the clothing sample comprises:
using a nine-level semantic scale to score the design styles of the clothing samples;
using, based on scores statistics, triangular fuzzy numbers to characterize the design styles of the clothing samples;
calculating the degree of proximity, expressed as:
[equation reproduced in the application as image media_image1.png]
wherein n represents the total number of experts participating in the evaluation, i denotes different clothing samples, j represents pairs of adjectives for different design styles, and Aijk1, Aijk2, and Aijk3 represent the triangular fuzzy numbers of the scores given by the k1-th, k2-th, and k3-th experts with respect to the adjective pair of the j-th design style for the i-th clothing sample;
for each clothing sample, calculating the overall utility value of the triangular fuzzy number for each design style by:
[equation reproduced in the application as image media_image2.png]
where [symbol reproduced in the application as image media_image3.png] = (ci, ai, di) represents the triangular fuzzy numbers for the design style, i = 1, 2, ..., n; UT(AT) represents the overall utility value of a triangular fuzzy number; and m and l are the upper and lower limits of the triangular fuzzy numbers, respectively; and calculating a degree of closeness between the clothing sample and the design style, wherein the degree of closeness is expressed as:
[equation reproduced in the application as image media_image4.png]
where UT(A) represents the overall utility value corresponding to the triangular fuzzy number A, UT(B) represents the overall utility value corresponding to the triangular fuzzy number B, and ST(AB) denotes the proximity between A and B, which represents the degree of closeness between the clothing sample and the design style.
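For context only, the general technique recited above (averaging expert scores expressed as triangular fuzzy numbers, reducing each to an overall utility value, and comparing utilities to obtain a degree of closeness) can be sketched as follows. The claimed formulas themselves appear in the application as equation images and are not reproduced here; the centroid utility and ratio-based closeness below are common textbook conventions, not necessarily the claimed ones, and all scores are made-up toy data.

```python
# Illustrative sketch only. The centroid utility and the closeness measure
# are common conventions for triangular fuzzy numbers (TFNs); the claimed
# formulas may differ. Expert scores are hypothetical nine-level-scale data.

def average_tfn(scores):
    """Aggregate n expert scores, each a TFN (c, a, d), by averaging."""
    n = len(scores)
    return tuple(sum(s[k] for s in scores) / n for k in range(3))

def utility(tfn):
    """Overall utility value of a TFN (c, a, d), here the simple centroid."""
    c, a, d = tfn
    return (c + a + d) / 3.0

def closeness(a, b):
    """Degree of closeness between two TFNs with positive utilities,
    in [0, 1]; 1.0 means the utilities coincide."""
    ua, ub = utility(a), utility(b)
    return 1.0 - abs(ua - ub) / max(ua, ub)

# three hypothetical experts score one design style of one clothing sample
sample_style = average_tfn([(6, 7, 8), (5, 7, 9), (7, 8, 9)])
reference_style = (6, 7, 8)  # TFN characterizing a target design style
degree = closeness(sample_style, reference_style)
```

A sample's design style would then be matched to whichever reference style yields the highest degree of closeness.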
The above-recited limitations amount to certain methods of organizing human activity, as they relate to sales activities and commercial interactions, because the claim involves fashion item recommendations and predictions based on obtaining fashion personality language feature data, establishing a fashion personality test scale and obtaining preferences, developing a relationship model, extracting clothing features, screening clothing samples, quantifying design styles and then matching the design styles of the clothing samples, obtaining the user clothing style preference values for the clothing samples, and establishing a clothing design model. The claims also include mathematical concepts and relationships, such as models, matrices, and other calculations. Accordingly, the claim recites an abstract idea. See MPEP § 2106.
Step 2A (Prong Two) of the Subject Matter Eligibility Test is the next step in the eligibility analysis and looks at whether the abstract idea is integrated into a practical application. This requires an additional element or combination of additional elements in the claims to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception. See MPEP § 2106.
In this instance, the claims recite the additional elements such as:
The computer-implemented autonomous fashion personality prediction and clothing recommendation method, comprising: …using machine learning algorithms; (Claim 1);
An electronic device, comprising: memory and processor, wherein the memory is used for storing computer-executable instructions, and the processor is used for executing the computer-executable instructions; wherein the computer-executable instructions, when executed by the processor, implement…(Claim 9);
A non-transitory computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions, when executed by a processor, implement… (Claim 10).
However, these elements do not amount to an improvement in the functioning of a computer or any other technology or technical field, apply the judicial exception with, or by use of, a particular machine, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
Independent claims and dependent claims also fail to recite elements which amount to an improvement in the functioning of a computer or any other technology or technical field, apply the judicial exception with, or by use of, a particular machine, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. For example, independent claims and dependent claims are directed to the abstract idea itself and do not amount to an integration according to any one of the considerations above.
Step 2B is the next step in the eligibility analysis and evaluates whether the claims recite additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception. According to Office procedure, revised Step 2A overlaps with Step 2B, and thus many of the considerations need not be re-evaluated in Step 2B because the answer will be the same. See MPEP § 2106.
In Step 2A, several additional elements were identified as additional limitations:
The computer-implemented autonomous fashion personality prediction and clothing recommendation method, comprising: …using machine learning algorithms; (Claim 1);
An electronic device, comprising: memory and processor, wherein the memory is used for storing computer-executable instructions, and the processor is used for executing the computer-executable instructions; wherein the computer-executable instructions, when executed by the processor, implement…(Claim 9);
A non-transitory computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions, when executed by a processor, implement… (Claim 10).
These additional limitations, including the limitations in the independent claims and dependent claims, do not amount to an inventive concept because the recitations above do not amount to an improvement in the functioning of a computer or any other technology or technical field, apply the judicial exception with, or by use of, a particular machine, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. In addition, they were already analyzed under Step 2A and did not amount to a practical application of the abstract idea.
For these reasons, the claims are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3, 7, and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Agarwal et al. (U.S. Pub. No. 20220051479 A1, hereinafter “Agrawal”) in view of Zadeh et al. (US Pub. No. 20140201126 A1, hereinafter “Zadeh”).
Regarding claim 1
Agrawal discloses a computer-implemented fashion personality prediction and clothing recommendation method, comprising (Agrawal, [0004]: method for apparel design and recommendation based on user sentiment; [0045]: social media and sharing apparel design):
obtaining fashion personality language feature data of a user from social networks (Agrawal, [0009]: obtain social media data (e.g. messages and comments) from user; [0047]: keywords in social media messages);
establishing a fashion personality test scale; obtaining based on the fashion personality test scale, all proportional preferences of fashion personality types of the user (Agrawal, [0008]: determine a user sentiment associated with the apparel design based on high to low score; [0025]: the apparel design may be refined based on user sentiment and public sentiment to match the user's desired apparel design; [0033]: the phrases of the text data may include particular terms that represent visual design elements that contain the information of the user instructions with respect to the type of apparel design requested by the user);
developing a relationship model between the fashion personality language feature data and the fashion personality types using machine learning algorithms (Agrawal, [0035]: ML models 138 may be configured to receive the output of the natural language processor 134 (or an output of one of the ML models 138) and to generate an output that indicates the apparel design; [0005]: perform natural language processing (NLP) on the text data to interpret the user instructions from the text data (e.g. apparel descriptions, design elements, other particular useful words or phrases for use in apparel design); [0006]: after performing the NLP, the server may generate an apparel design based on the interpreted user instructions from text data by providing the user instruction to ML model to indicate visual design elements based on user’s instruction; [0008]: user sentiment determined from text data; [0033]: the phrases of the text data may include particular terms that represent visual design elements that contain the information of the user instructions with respect to the type of apparel design requested by the user);
extracting clothing features, screening clothing samples, quantifying design styles of the clothing samples, and then matching the design styles of the clothing samples (Agrawal, [0033]: NLP includes named entity recognition (NER) identifies design elements and apparel base/content and sub-categories from selected words and phrases (e.g. types of designs, sketches, logos, image design elements, text design elements, pattern design elements, color design elements, apparel color, apparel type, apparel size) and each word corresponds to a distance value; [0034]: based on recognized named entities from NLP generate apparel design with the same design elements and apparel base/content and subcategories; [0051]: extract from text data; [0026]: apparel includes clothing);
obtaining user clothing style preference values for the clothing samples; (Agrawal, [0062]: user selected type of apparel and one or more user-selected visual design elements; [0043]: detect user emotions based on data received with respect to emotions to apparel design);
and based on correspondence between the fashion personality types and the user clothing style preference values, as well as correspondence between the matched design styles and design elements, establishing a clothing design model by combining the relationship model between the collected user language feature data and the fashion personality types for prediction and recommendation (Agrawal, [0024]: ML model identify visual design elements indicated by the user instructions and to combine the visual design elements with apparel content to generate the apparel design. The user instructions may be provided as text data, natural language processing (NLP) may be performed on the text data to interpret the user instructions; [0025]: generate using neural-style transfer (machine learning based virtual changing room) to display the apparel design when worn by user; [0005]: NLP on the text data (e.g. apparel descriptions, design elements, other particular useful words or phrases for use in apparel design; [0033]: terms in text data include visual design elements such as types of designs, apparel base/content, image design elements, text design elements, pattern design elements, color design elements, apparel color, apparel type, apparel size; [0074]: generate recommendations; [0034]: generate apparel design based on apparel type, apparel size, apparel color, location of one or more visual design elements on the apparel, characteristics of the one or more visual design elements; [0026]: apparel includes clothing);
wherein obtaining the fashion personality language feature data from the social networks comprises: collecting different types of social text used by the user from the social networks (Agrawal, [0009]: obtain social media data (e.g. messages and comments) from user; [0047]: keywords in social media messages);
obtaining a set of personality language feature elements (m) by combining number of words used by the user in the different types of social texts and an emotional dictionary; obtaining correlation coefficients between the personality language feature elements and the fashion personality types, and filtering out personality language feature elements with strong correlation coefficients (Agrawal, [0043]: the emotions may include happiness, sadness, indifference, surprise, disappointment, frustration, or other emotions, and the emotions may each be associated with a score or rating indicating the user sentiment; [0047]: a number of keywords from social media reply messages and comments regarding a sentiment associated with apparel design (e.g., “like,” “love,” “hate,” etc.); [0048]: a number of replies that satisfies a plurality threshold may be mapped to a score regarding user sentiment associated with the keyword; [0035]: ML models 138 may be configured to receive the output of the natural language processor 134 (or an output of one of the ML models 138) and to generate an output that indicates the apparel design; [0005]: perform natural language processing (NLP) on the text data to interpret the user instructions from the text data; [0006]: after performing the NLP, the server may generate an apparel design based on the interpreted user instructions from text data by providing the user instruction to ML model to indicate visual design elements based on user’s instruction; [0008]: user sentiment determined from text data; [0033]: the phrases of the text data may include particular terms that represent visual design elements that contain the information of the user instructions with respect to the type of apparel design requested by the user; [0056]: refine the apparel design process until generation of an apparel design associated with a user sentiment score or rating that satisfies a threshold with higher user sentiment scores or ratings than previous apparel designs); and
using the filtered personality language feature elements as the fashion personality language feature data for developing the relationship model; wherein filtering out the personality language feature elements with strong correlation coefficients comprises reserving personality language feature elements with correlation coefficients greater than a first threshold as a key language feature set for fashion personality prediction index… of personality language feature elements… of personality language feature elements with strong correlation with the fashion personality types (Agrawal, [0069]: determine the public sentiment for the apparel design based on analyzing the social media message; [0070]: assign rating to each reaction to social media message and determine a sentiment score of 2 if the number of likes satisfies a first threshold, a sentiment score of 1 if the number of likes satisfies a second threshold but fails to satisfy the first threshold, a sentiment score of 0.5 if the number of likes satisfies a third threshold but fails to satisfy the second threshold, and a sentiment score of 0 if the number of likes fails to satisfy the third threshold; [0071]: determine the total sentiment score (e.g., a feedback score) based on a weighted sum of the various sentiment scores for user sentiments and for the social media network),
wherein developing the relationship model between the user language feature data and the fashion personality types comprises: receiving the key language feature set … as input data; and generating a multidimensional matrix… as output data, wherein the multidimensional matrix… corresponds to the proportional preferences of the fashion personality types (Agrawal, [0049]: generate vectors that include historical purchases and user sentiment distributions for each user, group users having similar vector value patterns into groups, and generate a matrix of user and vector values for the groups; after the group is identified, the recommendation engine 148 may be configured to perform operations based on user profiles for other members of the group, such as providing recommendations for apparel associated with historical purchases of the other group members, adding additional visual design elements or apparel characteristics)
and wherein quantifying the design styles of the clothing samples and then matching the design styles of the clothing sample comprises (Agrawal, [0033]: NLP includes named entity recognition (NER) identifies design elements and apparel base/content and sub-categories from selected words and phrases that correspond to distance values and context sensitive and context independent identification; [0034]: based on recognized named entities from NLP generate apparel design with the same design elements and apparel base/content and subcategories; [0051]: extract from text data; [0026]: apparel includes clothing):
Agrawal does not teach:
expressed as T={ti, tj, ..., tw} (i<j<w ∈ {1, 2, 3, ..., m}) and w (w<n), where m represents the total number…, w represents number…; …Ti={t1, t2, ..., tw}…; and …Qi={q1, q2, ..., qk}…, …Qi={q1, q2, ..., qk}… P={p1, p2, ..., pk} respectively;
using a nine-level semantic scale to score the design styles of the clothing samples; using, based on scores statistics, triangular fuzzy numbers to characterize the design styles of the clothing samples;
calculating the degree of proximity, expressed as:
[Equation image: media_image1.png]
wherein n represents the total number of experts participating in the evaluation, i denotes different clothing samples, and j represents pairs of adjectives for different design styles, and Aijk1, Aijk2, and Aijk3 represent the triangular fuzzy numbers of scores given by the k1-th, k2-th, and k3-th experts with respect to the adjective pair of the j-th design style for the i-th clothing sample;
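The expert-score aggregation recited above can be sketched as follows. This is a hedged illustration: each expert's nine-level semantic-scale rating is assumed to map to a triangular fuzzy number (lower, modal, upper), and the n expert numbers for a given sample/style pair are combined component-wise by arithmetic mean; the claim's exact aggregation formula appears in the equation image (media_image1.png) and is not reproduced here.

```python
def triangular_mean(tfns):
    """Component-wise arithmetic mean of a list of triangular fuzzy
    numbers, each given as a 3-tuple (lower, modal, upper)."""
    n = len(tfns)
    return tuple(sum(t[c] for t in tfns) / n for c in range(3))
```

For instance, two expert ratings (1, 2, 3) and (3, 4, 5) would aggregate to (2.0, 3.0, 4.0).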
for each clothing sample, calculating overall utility value of triangular fuzzy number for each design style by:
[Equation image: media_image2.png]
,
where
[Equation image: media_image3.png]
= (ci, ai, di) represents the triangular fuzzy numbers for the design style, i=1, 2, ..., n; UT(AT) represents the overall utility value of a triangular fuzzy number, and m and l are the upper and lower limits of the triangular fuzzy numbers, respectively; and calculating a degree of closeness between the clothing sample and the design style, wherein the degree of closeness is expressed as:
[Equation image: media_image4.png]
,
where UT(A) represents the overall utility value corresponding to the triangular fuzzy number A, UT(B) represents the overall utility value corresponding to the triangular fuzzy number B, and ST(AB) denotes the proximity between A and B, which represents the degree of closeness between the clothing sample and the design style.
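The utility and closeness computations recited above can be sketched as follows. This is illustrative only: the claim's actual formulas are in the equation images (media_image2.png, media_image4.png) and are not reproduced here, so this sketch substitutes a simple centroid utility for a triangular fuzzy number A = (c, a, d) normalized to the scale limits l and m, and a closeness measure that equals 1 when the two utilities coincide. Both choices are assumptions for illustration.

```python
def utility(tfn, l=1.0, m=9.0):
    """Assumed centroid utility of a triangular fuzzy number (c, a, d),
    normalized to [0, 1] using the scale's lower limit l and upper limit m."""
    c, a, d = tfn
    return ((c + a + d) / 3.0 - l) / (m - l)

def closeness(a_tfn, b_tfn, l=1.0, m=9.0):
    """Assumed degree of closeness: 1 minus the absolute difference of
    the two overall utility values."""
    return 1.0 - abs(utility(a_tfn, l, m) - utility(b_tfn, l, m))
```

Under this sketch, identical fuzzy numbers give a closeness of 1, and the extreme ratings (1, 1, 1) and (9, 9, 9) give a closeness of 0.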
However Zadeh teaches:
expressed as T={ti, tj, ..., tw} (i<j<w ∈ {1, 2, 3, ..., m}) and w (w<n), where m represents the total number…, w represents number…; …Ti={t1, t2, ..., tw}…; and …Qi={q1, q2, ..., qk}…, …Qi={q1, q2, ..., qk}… P={p1, p2, ..., pk} respectively (Zadeh, a feature set {k1, ..., kn}; [1621]: M×M matrix for the representation of all the relations and an assigned weight, wj, for j=1, ..., N; [2377]: wherein i=1, 2, ..., N; [1954]: some or all of the N shapes sub-divided into Q1 to Qn shapes such as (Q1+Q2+ ... +Qn));
using a nine-level semantic scale to score the design styles of the clothing samples; using, based on scores statistics, triangular fuzzy numbers to characterize the design styles of the clothing samples (Zadeh, [2658]: terms are analyzed by a semantic/relation/reasoning engine for clothing based on attributes such as style, color, or a person (e.g., fashion model, brand, or a person), or a combination of matching criteria; [2418]: semantic structure with multiple levels; [1565]: values of attributes includes one or more of fuzzy numbers);
calculating the degree of proximity, expressed as:
[Equation image: media_image1.png]
, wherein n represents the total number of experts participating in the evaluation, i denotes different clothing samples, and j represents pairs of adjectives for different design styles, and Aijk1, Aijk2, and Aijk3 represent the triangular fuzzy numbers of scores given by the k1-th, k2-th, and k3-th experts with respect to the adjective pair of the j-th design style for the i-th clothing sample (Zadeh, [2162]: all methods can apply expertise factor; [2305]: expertise factor using people to tag or give opinion on the test samples, to show the bias or expertise; [1565]: values of attribute includes one or more of fuzzy numbers; [2581]: fashion dresses and styles; [2658]: for clothing and dress, and various attributes are determined such as style and color; [1448]: an attribute is also a fuzzy parameter; [1449]: degree of distance written as some dimensionless number (e.g. C=K1/D); [3226]: attributes can be targeted and learned from different styles);
for each clothing sample, calculating overall utility value of triangular fuzzy number for each design style by:
[Equation image: media_image2.png]
, where
[Equation image: media_image3.png]
= (ci, ai, di) represents the triangular fuzzy numbers for the design style, i=1, 2, ..., n; UT(AT) represents the overall utility value of a triangular fuzzy number, and m and l are the upper and lower limits of the triangular fuzzy numbers, respectively (Zadeh, [2468]: probability function in fuzzy set; [1565]: values of attribute includes one or more of fuzzy numbers; [2658]: for clothing and dress, and various attributes are determined such as style and color; [2162]: all methods can apply expertise factor; [2305]: expertise factor using people to tag or give opinion on the test samples, to show the bias or expertise; [0945]: upper and lower probabilities);
and calculating a degree of closeness between the clothing sample and the design style, wherein the degree of closeness is expressed as:
[Equation image: media_image4.png]
, where UT(A) represents the overall utility value corresponding to the triangular fuzzy number A, UT(B) represents the overall utility value corresponding to the triangular fuzzy number B, and ST(AB) denotes the proximity between A and B, which represents the degree of closeness between the clothing sample and the design style (Zadeh, [1572]: similarity measure between A and Aα based on fuzzy set; [1385]: looking for degree of similarity, e.g. as a fuzzy parameter for clothing; [2468]: probability function in fuzzy set; [1565]: values of attributes one or more of fuzzy numbers; [2658]: for clothing and dress, and various attributes are determined such as style and color).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the quantifying of the design styles of Agrawal with the calculations of scores, fuzzy numbers, and degrees of closeness as taught by Zadeh because the results of such a modification would be predictable. Specifically, Agrawal would continue to teach quantifying the design styles, while the calculations of scores, fuzzy numbers, and degrees of closeness would be performed according to the teachings of Zadeh in order to provide reliable information and computations. This is a predictable result of the combination. (Zadeh, [0094-0095]).
Regarding claim 3
The combination of Agrawal and Zadeh teaches the computer-implemented autonomous fashion personality prediction and clothing recommendation method according to Claim 1,
wherein establishing the fashion personality test scale comprises: collecting a large number of daily product images related to fashion, (Agrawal, [0037]: access and obtain images of apparel; [0039]: generate the visual data based at least in part on image data; [0078]: extract information from the input image and constructing a text embedding for visual attribute related to an overall objective of a text classifier);
generating personality-related vocabularies based on the daily product images; after screening the personality-related vocabularies, classifying the personality-related vocabularies by…, wherein each type corresponds to a fashion personality type, represented by p1, p2,... pk (Agrawal, [0008]: determine the user sentiment associated with the apparel design based on the image data such as the emotional state of the user associated with the apparel design; [0067]: detect user emotions from image data; [0043]: the emotions may include happiness, sadness, indifference, surprise, disappointment, frustration, or other emotions; [0064]: generate reward values or reward probabilities based on the user sentiment; [0072]: a model having a combination of actions (A) and rewards (R) and that performs the steps of: (1) determining K possibilities with reward probabilities, {P1, . . . , PK} and sentiment/feedback scores may be assigned reward probabilities; [0032]: word vectorization, dense vectors of words may be mapped to a lower-dimensional vector space. Vectorization may capture the semantics of the words or phrases by placing semantically similar words or phrases close to each other inside an embedding space);
selecting, through significant difference analysis and reliability and validity testing, items that have discriminatory power for personality identification to form a fashion personality test scale; obtaining, based on the fashion personality test scale, all the proportional preferences of the fashion personality types of the users, comprises: obtaining a set of scores by scoring each personality type using the fashion personality test scale (Agrawal, [0066]: may select one or more controls to change the virtual changing room, such as approving a selected apparel design, rejecting a selected apparel design, selecting a previously displayed apparel design, selecting a next apparel design, updating apparel designs; [0067]: map the selected controls for the virtual changing room to user emotions and generate rating or score; [0053]: generate user sentiment scores or ratings based on the detected emotions; [0048]: determine a public sentiment score or rating associated with the apparel design by aggregating the sentiment scores);
and …the scores of each personality type …to obtain the proportional preferences of fashion personality types P={p1, p2, ..., pk} (Agrawal, [0008]: determine the user sentiment associated with the apparel design based on the image data such as the emotional state of the user associated with the apparel design; [0067]: detect user emotions from image data; [0043]: the emotions may include happiness, sadness, indifference, surprise, disappointment, frustration, or other emotions; [0064]: generate reward values or reward probabilities based on the user sentiment; [0072]: a model having a combination of actions (A) and rewards (R) and that performs the steps of: (1) determining K possibilities with reward probabilities, {P1, . . . , PK} and sentiment/feedback scores may be assigned reward probabilities).
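The step of converting per-type scale scores into the proportional preferences P = {p1, p2, ..., pk} can be sketched as a simple normalization. This is an assumption for illustration: the claim only recites comparing the scores with the number of questions, and the exact normalization rule is not reproduced in this record.

```python
def proportional_preferences(scores):
    """Normalize per-personality-type scores so the resulting
    proportional preferences sum to 1 (assumed normalization rule)."""
    total = sum(scores)
    return [s / total for s in scores]
```

For example, per-type scores of 2, 2, and 4 would yield proportional preferences of 0.25, 0.25, and 0.5.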
However, Zadeh teaches:
…by clustering algorithm combined with semantics…(Zadeh, [0095]: clustering, NLP, semantic);
designing a series of fashion personality assessment questions based on text and images to explore specific behaviors and psychological expressions of users with different fashion personalities in fashion-related activities (Zadeh, [2142]: the emotion is related to the character of the person, mood, intention, future action, state of mind, or psychology; [1214]: Each relevant question can in turn refer to another relevant question or information, as a cascade to suggest more questions and information for the user; [2821] Find emotions, actions, context, fashion, through images and text; [2937] The teachings here have applications e.g. for fashion e.g. clothing search; [2861] Interactive with user (e.g. put forth photos or parts or segments, and ask NL (natural language) questions); [1354]: one gets to the answer(s) by following multiple paths, starting from the question template);
and comparing the scores … with the number of questions … (Zadeh, [2645]: adds all the scores for comparisons together for all parameters; [0786]: all the parameters above (e.g. the degree of the helpfulness) [3243]: number of inquiries; [0781] The degree of "helpfulness of a statement" is based on question asked; [1314] Combining all the questions above to find all or most relevant questions with relevance scoring).
The motivation to combine Agrawal and Zadeh is the same as set forth above in claim 1.
Regarding claim 7
The combination of Agrawal and Zadeh teaches the computer-implemented autonomous fashion personality prediction and clothing recommendation method according to Claim 1,
wherein the design elements are, based on a clothing style, a set of design elements that can form complete clothing, and various sets of design elements are classified and numbered (Agrawal, [0086]: machine learning (ML) model configured to indicate one or more visual apparel design elements based on the input and to determine the apparel design based on the one or more visual apparel design elements);
the clothing design model is configured to: receive the multidimensional matrix…as input, comprising various design elements …of a set of design elements …(Agrawal, [0082]: neural style transfer network trained on apparel and visual design elements by training functions that generate a gram/style matrix formation, defining the style cost function, assigning style weights to optimize apparel design image generation; [0006]: neural style transfer to combine the visual design elements and apparel content to generate the apparel design; [0037]: training data includes cost function that is received by ML model).
Agrawal does not teach:
Qi={q1, q2, ..., qk}…; and a multidimensional dataset as output… S={Si, Sj, ..., Sv}, wherein Sk (k=i, j, ..., v) represents classification number… of class k.
However, Zadeh teaches:
Qi={q1, q2, ..., qk}…; and a multidimensional dataset as output… S={Si, Sj, ..., Sv}, wherein Sk (k=i, j, ..., v) represents classification number… of class k (Zadeh, a feature set {k1, ..., kn}; [1621]: M×M matrix for the representation of all the relations and an assigned weight, wj, for j=1, ..., N; [2377]: wherein i=1, 2, ..., N; [1954]: some or all of the N shapes sub-divided into Q1 to Qn shapes such as (Q1+Q2+ ... +Qn)).
The motivation to combine Agrawal and Zadeh is the same as set forth above in claim 1.
Regarding claim 9
The combination of Agrawal and Zadeh teaches an electronic device, comprising: a memory and a processor, wherein the memory is used for storing computer-executable instructions; wherein the computer-executable instructions, when executed by the processor, implement any of the steps of the computer-implemented autonomous fashion personality prediction and clothing recommendation method of claim 1 (Agrawal, [0014]: user device, processor, and memory; [0008]: determine a user sentiment associated with the apparel design based on high to low score; [0025]: the apparel design may be refined based on user sentiment and public sentiment to match the user's desired apparel design; [0033]: the phrases of the text data may include particular terms that represent visual design elements that contain the information of the user instructions with respect to the type of apparel design requested by the user).
Regarding claim 10
The combination of Agrawal and Zadeh teaches a non-transitory computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions, when executed by a processor, implement any of the steps in the fashion personality prediction and clothing recommendation method of claim 1 (Agrawal, [0008]: determine a user sentiment associated with the apparel design based on high to low score; [0025]: the apparel design may be refined based on user sentiment and public sentiment to match the user's desired apparel design; [0033]: the phrases of the text data may include particular terms that represent visual design elements that contain the information of the user instructions with respect to the type of apparel design requested by the user; [0109]: computer readable medium).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Chen et al. (US Pub. No. 20200320769 A1, hereinafter "Chen"), related to predicting garment or accessory attributes using deep learning techniques; Joung et al. (KR 20220099753 A), related to combining information on the fashion style desired by consumers with current fashion trends; and non-patent literature, "Using supervised learning to classify clothing brand styles," related to machine learning techniques to search for fashion products.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LATASHA DEVI RAMPHAL whose telephone number is (571)272-2644. The examiner can normally be reached 11 AM - 7:30 PM (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey A. Smith can be reached at 5712726763. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LATASHA D RAMPHAL/Examiner, Art Unit 3688
/Jeffrey A. Smith/Supervisory Patent Examiner, Art Unit 3688