Prosecution Insights
Last updated: April 19, 2026
Application No. 18/284,635

MACHINE LEARNING METHODS AND SYSTEMS FOR ASSESSING PRODUCT CONCEPTS

Non-Final OA: §101, §102, §103, §112
Filed: Sep 28, 2023
Examiner: MEINECKE DIAZ, SUSANNA M
Art Unit: 3625
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: AI Palette Pte. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 31% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 4y 4m
Grant Probability With Interview: 51%

Examiner Intelligence

Career Allow Rate: 31% (211 granted / 689 resolved; -21.4% vs TC avg)
Interview Lift: +20.5% among resolved cases with an interview (a strong lift)
Typical Timeline: 4y 4m average prosecution; 47 applications currently pending
Career History: 736 total applications across all art units

Statute-Specific Performance

§101: 34.3% (-5.7% vs TC avg)
§103: 31.8% (-8.2% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§112: 15.4% (-24.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 689 resolved cases.
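The headline examiner statistics above combine simply: the allow rate is granted over resolved cases, and the interview lift is the difference between the with- and without-interview allowance rates. A minimal sketch — the without-interview rate below is an illustrative assumption chosen to match the reported +20.5% lift, not a figure from the underlying dataset:

```python
# Reproduce the headline numbers from the cards above.
def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(211, 689)        # ~30.6%, displayed as 31%
rate_with_interview = 51.0           # card: grant probability with interview
rate_without_interview = 30.5        # assumed, to match the reported lift
interview_lift = rate_with_interview - rate_without_interview

print(f"Career allow rate: {career:.1f}%")        # 30.6%
print(f"Interview lift: +{interview_lift:.1f}%")  # +20.5%
```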

Office Action

Grounds of rejection: §101, §102, §103, §112
DETAILED ACTION

Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 2-6, 8-11, and 13-20 are objected to because of the following informalities: Each of claims 2-6, 8-11, and 13-20 appears to be written as a dependent claim; however, instead of referring to the respective independent claim as “The machine learning method according to claim 1” (for claims 2-6), “The method of training a market model according to claim 7” (for claims 8-10), “…the processor to carry out the method according to claim 1” (for claim 11), and “The machine learning system according to claim 17” (for claims 18-20), the word “a” is used in place of “the,” thereby raising questions as to whether or not all of the limitations of the respective independent claim are meant to be read into the respective dependent claim. For examination purposes, the aforementioned phrases will be interpreted with “the” instead of “a”; however, appropriate correction and/or clarification is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 15-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claims 15 and 17 are both recited as “a machine learning system according to claim 11”; however, claim 11 is a computer readable medium claim. The dependent claims inherit the rejection of the claim(s) from which each depends. For examination purposes, claims 15 and 17 will be interpreted as “The machine learning system according to claim 12”; however, appropriate correction and/or clarification is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claimed invention is directed to “the assessment of product concepts by simulating consumer responses to product concepts” (Spec: p. 1: ll. 6-7) without significantly more.

Step Analysis 1: Statutory Category?

Yes – The claims fall within at least one of the four categories of patent eligible subject matter: Process (claims 1-10), Apparatus (claims 12-20), Article of Manufacture (claim 11).

Independent claims:

Step Analysis 2A – Prong 1: Judicial Exception Recited?
Yes – Aside from the additional elements identified in Step 2A – Prong 2 below, the claims recite:

[Claim 1] A method for assessing product concepts, the method comprising: receiving a plurality of candidate product concepts, each candidate product concept comprising a natural language name and description of a candidate product; for each candidate product concept, extracting product ingredient data and product theme data, the product ingredient data indicating at least one product ingredient and the product theme data indicating at least one product theme; for each candidate product concept, determining a relevance metric using a market model, the market model comprising a model to provide relevance scores for product themes and product ingredients, and determining an originality metric using a cluster model; ranking the candidate product concepts in a ranking order according to the respective relevance metrics and originality metrics; and generating a ranked list of candidate product concepts according to the ranking order.

[Claim 7] A method of creating a market model to provide relevance scores for product themes and product ingredients, the method comprising: monitoring social network posts to collect data; extracting product ingredient data and product theme data from the collected data; creating a market model using the collected data to recognize ingredients and product themes from the collected data; and creating a market model based on the ingredients and the product themes.

[Claim 11] carry out a method according to claim 1.
[Claim 12] receive a plurality of candidate product concepts, each candidate product concept comprising a natural language name and description of a candidate product; for each candidate product concept, extract product ingredient data and product theme data, the product ingredient data indicating at least one product ingredient and the product theme data indicating at least one product theme; for each candidate product concept, determining a relevance metric using the market model, and determine an originality metric using the cluster model; rank the candidate product concepts in a ranking order according to the respective relevance metrics and originality metrics; and generate a ranked list of candidate product concepts according to the ranking order.

It is noted that a simulated agent may be interpreted as a representative of an entity.

Aside from the additional elements, the aforementioned claim details exemplify the abstract idea(s) of a mental process (since the details include concepts performed in the human mind, including an observation, evaluation, judgment, and/or opinion). As explained in MPEP § 2106(a)(2)(C)(III), “The courts consider a mental process (thinking) that ‘can be performed in the human mind, or by a human using a pen and paper’ to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, ‘methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas the ‘basic tools of scientific and technological work’ that are open to all.’’ 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)).” The limitations reproduced above, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components.
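For orientation, the pipeline recited in independent claims 1 and 12 (receive candidate concepts, extract ingredient and theme data, score relevance with a market model and originality with a cluster model, then rank) can be sketched in toy Python. Every vocabulary, score, and function name here is a hypothetical stand-in for illustration, not the applicant's disclosed implementation:

```python
from dataclasses import dataclass

# Toy stand-ins for the trained models: a "market model" of learned relevance
# scores and a "cluster model" represented by known cluster terms (assumed).
MARKET_MODEL = {"matcha": 0.9, "wellness": 0.8, "oat": 0.4}
CLUSTER_CENTROIDS = {"oat", "energy"}

@dataclass
class Concept:
    name: str          # natural language name of the candidate product
    description: str   # natural language description

def extract(concept: Concept) -> tuple[list[str], list[str]]:
    """Extract product ingredient data and product theme data (keyword toy)."""
    words = concept.description.lower().split()
    ingredients = [w for w in words if w in {"matcha", "oat", "collagen"}]
    themes = [w for w in words if w in {"wellness", "energy", "immunity"}]
    return ingredients, themes

def relevance(terms: list[str]) -> float:
    """Market-model relevance metric: sum of learned scores for the terms."""
    return sum(MARKET_MODEL.get(t, 0.0) for t in terms)

def originality(terms: list[str]) -> float:
    """Cluster-model originality metric: share of terms outside known clusters."""
    seen = set(terms)
    return 1.0 - len(seen & CLUSTER_CENTROIDS) / max(len(seen), 1)

def rank_concepts(concepts: list[Concept]) -> list[Concept]:
    """Rank candidates by combined relevance and originality metrics."""
    def score(c: Concept) -> float:
        ingredients, themes = extract(c)
        terms = ingredients + themes
        return relevance(terms) + originality(terms)
    return sorted(concepts, key=score, reverse=True)
```

On this toy data, `rank_concepts` places a concept whose terms are relevant but fall outside existing clusters ahead of one that only matches known clusters.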
That is, other than reciting the additional elements identified in Step 2A – Prong 2 below, nothing in the claim elements precludes the steps from practically being performed in the mind and/or by a human using a pen and paper. For example, but for the recitations of generic computer and other processing components (identified in Step 2A – Prong 2 below), the respectively recited steps/functions of the claims, as drafted and set forth above, are a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind and/or with the use of pen and paper. A human user can gather (receive) data, extract information, create and evaluate a market model, rank candidate concepts, etc. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind (and/or with pen and paper) but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

Aside from the additional elements, the aforementioned claim details exemplify a method of organizing human activity (since the details include examples of commercial or legal interactions, including advertising, marketing or sales activities or behaviors, and/or business relations and managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions). More specifically, the evaluated process is related to “the assessment of product concepts by simulating consumer responses to product concepts” (Spec: p. 1: ll. 6-7), which (under its broadest reasonable interpretation) is an example of marketing activities (i.e., organizing human activity); therefore, aside from the recitations of generic computer and other processing components (identified in Step 2A – Prong 2 below), the limitations identified in the more detailed claim listing above encompass the abstract idea of organizing human activity.

2A – Prong 2: Integrated into a Practical Application?

No – The judicial exception(s) is/are not integrated into a practical application. Claim 1 recites that the method for assessing product concepts is a machine learning method. Claim 1 recites that the market model comprises a machine learning model trained to provide relevance scores for product themes and product ingredients, and determining an originality metric using a cluster model. Claim 7 recites a method of training a market model. Claim 7 recites training a recognition engine using the collected data to recognize ingredients and product themes from the collected data; and training a market model based on the ingredients and the product themes. Claim 11 recites a computer readable medium storing processor executable instructions which when executed on a processor cause the processor to carry out a method according to claim 1. Claim 12 recites a machine learning system. Claim 12 recites a machine learning system for assessing product concepts, the system comprising: a processor; a data storage device storing: a market model comprising a machine learning model trained to provide relevance scores for product themes and product ingredients; and a cluster model; and a program storage device storing computer program instructions operable to cause the processor to generally perform the recited operations. The claims as a whole merely describe how to generally “apply” the abstract idea(s) in a computer environment.
The claimed processing elements are recited at a high level of generality and are merely invoked as a tool to perform the abstract idea(s). Simply implementing the abstract idea(s) on a general-purpose processor is not a practical application of the abstract idea(s); Applicant’s specification discloses that the invention may be implemented using general-purpose processing elements and other generic components (Spec: p. 8: 1 – p. 9: 9). The use of a processor/processing elements (e.g., as recited in all of the claims) facilitates generic processor operations. The use of a memory or machine-readable media with executable instructions facilitates generic processor operations. The additional elements are recited at a high-level of generality (i.e., as generic processing elements performing generic computer functions) such that the incorporation of the additional processing elements amounts to no more than mere instructions to apply the judicial exception(s) using generic computer components. There is no indication in the Specification that the steps/functions of the claims require any inventive programming or necessitate any specialized or other inventive computer components (i.e., the steps/functions of the claims may be implemented using capabilities of general-purpose computer components). Accordingly, the additional elements do not integrate the abstract ideas into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea(s). The processing components presented in the claims simply utilize the capabilities of a general-purpose computer and are, thus, merely tools to implement the abstract idea(s). 
As seen in MPEP § 2106.05(a)(I) and § 2106.05(f)(2), the court found that accelerating a process when the increased speed solely comes from the capabilities of a general-purpose computer is not sufficient to show an improvement in computer-functionality and it amounts to a mere invocation of computers or machinery as a tool to perform an existing process (see FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016)). Considering that the implementation of the machine learning model and/or the training of the model is performed using generic processing elements, such an implementation is presented as a generic recitation of machine learning in the claims and as a general link to technology. The machine learning-based processing elements are simply tools to generally automate the underlying process that could be performed by a human. It is further noted that, as described in Applicant’s Specification, the machine learning operations are generic machine learning operations. The Specification presents no assertion that there is any improvement in the automated machine learning process itself. Such a generic recitation of machine learning, as recited in the claims, is little more than automating an analogous process that can be performed by a human. There is no transformation or reduction of a particular article to a different state or thing recited in the claims. Additionally, even when considering the operations of the additional elements as an ordered combination, the ordered combination does not amount to significantly more than what is present in the claims when each operation is considered separately.

2B: Claim(s) Provide(s) an Inventive Concept?

No – The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception(s).
As discussed above with respect to integration of the abstract idea(s) into a practical application, the use of the additional elements to perform the steps identified in Step 2A – Prong 1 above amounts to no more than mere instructions to apply the exceptions using a generic computer component(s). Mere instructions to apply an exception using a generic computer component(s) cannot provide an inventive concept. The claims are not patent eligible.

Dependent claims:

Step Analysis 2A – Prong 1: Judicial Exception Recited?

Yes – Aside from the additional elements identified in Step 2A – Prong 2 below, the claims recite:

[Claim 2] wherein extracting product ingredient data and product theme data, comprises using an embedding model and a recognition model.

[Claim 3] wherein the embedding model comprises mappings from a plurality of languages.

[Claim 4] wherein determining a relevance metric using a market model for each candidate product concept comprises: simulating a consumer response to each respective candidate product concept using a plurality of simulated agents which are configured to update the market model and thereby obtain a respective simulated market model for the respective candidate product concept, and determining the respective relevance metric from the respective simulated market model for the candidate product concept.

[Claim 5] wherein simulating the consumer response to the candidate product using a plurality of simulated agents comprises simulating interactions between simulated agents over a plurality of simulation cycles, wherein simulated agents of the plurality of simulated agents are configured to be connected to other connected simulated agents of the plurality of simulated agents and thereby influence behavior of the connected simulated agents.

[Claim 6] wherein a first simulated agent of the plurality of simulated agents has a higher number of connections than a second simulated agent of the plurality of simulated agents.
[Claim 8] wherein monitoring social network posts to collect data comprises implementing a plurality of market sensors.

[Claim 9] wherein the collected data comprises text data and image data.

[Claim 13] extract product ingredient data and product theme data using an embedding model and a recognition model.

[Claim 14] wherein the embedding model comprises mappings from a plurality of languages.

[Claim 15] determine a relevance metric using a market model for each candidate product concept by simulating a consumer response to each respective candidate product concept using a plurality of simulated agents which are configured to update the market model and thereby obtain a respective simulated market model for the respective candidate product concept, and determining the respective relevance metric from the respective simulated market model for the candidate product concept.

[Claim 16] simulate the consumer response to the candidate product using a plurality of simulated agents by simulating interactions between simulated agents over a plurality of simulation cycles, wherein simulated agents of the plurality of simulated agents are configured to be connected to other connected simulated agents of the plurality of simulated agents and thereby influence behavior of the connected simulated agents.

[Claim 17] create the market model to provide relevance scores for product themes and product ingredients by: monitoring social network posts to collect data; extracting product ingredient data and product theme data from the collected data; creating a model using the collected data to recognize ingredients and product themes from the collected data; and creating a market model based on the ingredients and the product themes.

[Claim 18] monitor social network posts to collect data by implementing a plurality of market sensors.

[Claim 19] wherein the collected data comprises text data and image data.
[Claim 20] extract product ingredient and product theme data from the collected data.

It is noted that a simulated agent may be interpreted as a representative of an entity. The dependent claims further present details of the abstract ideas identified in regard to the independent claims.

Aside from the additional elements, the aforementioned claim details exemplify the abstract idea(s) of a mental process (since the details include concepts performed in the human mind, including an observation, evaluation, judgment, and/or opinion). As explained in MPEP § 2106(a)(2)(C)(III), “The courts consider a mental process (thinking) that ‘can be performed in the human mind, or by a human using a pen and paper’ to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, ‘methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas the ‘basic tools of scientific and technological work’ that are open to all.’’ 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)).” The limitations reproduced above, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting the additional elements identified in Step 2A – Prong 2 below, nothing in the claim elements precludes the steps from practically being performed in the mind and/or by a human using a pen and paper. For example, but for the recitations of generic computer and other processing components (identified in Step 2A – Prong 2 below), the respectively recited steps/functions of the claims, as drafted and set forth above, are a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind and/or with the use of pen and paper.
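The agent-based simulation recited in claims 4-6 and 15-16 (connected simulated agents interacting over a plurality of simulation cycles, with some agents more connected than others, and connected agents influencing each other's behavior) can be illustrated with a minimal sketch. The network shape, update rule, and names are assumptions for illustration only, not the application's disclosed simulation:

```python
import random

def simulate_response(n_agents: int = 10, cycles: int = 5,
                      influence: float = 0.3, seed: int = 0) -> list[float]:
    """Toy agent-based consumer simulation run over several cycles."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]  # initial responses
    # Connection graph: a ring, plus agent 0 as a highly connected "hub"
    # (cf. claim 6: one agent has more connections than another).
    neighbours = {i: {(i - 1) % n_agents, (i + 1) % n_agents, 0} - {i}
                  for i in range(1, n_agents)}
    neighbours[0] = set(range(1, n_agents))
    for _ in range(cycles):
        nxt = list(opinions)
        for i, links in neighbours.items():
            peer_avg = sum(opinions[j] for j in links) / len(links)
            # Connected agents nudge each other's behavior each cycle.
            nxt[i] = (1 - influence) * opinions[i] + influence * peer_avg
        opinions = nxt
    return opinions  # per-agent simulated response to the candidate concept
```

Because each update is a convex combination of an agent's own opinion and its neighbours' average, opinions stay in [0, 1] and converge toward consensus as cycles accumulate.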
A human user can gather (receive) data, extract information, create and evaluate a market model, rank candidate concepts, simulate scenarios, monitor social network posts to collect data, etc. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind (and/or with pen and paper) but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

Aside from the additional elements, the aforementioned claim details exemplify a method of organizing human activity (since the details include examples of commercial or legal interactions, including advertising, marketing or sales activities or behaviors, and/or business relations and managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions). More specifically, the evaluated process is related to “the assessment of product concepts by simulating consumer responses to product concepts” (Spec: p. 1: ll. 6-7), which (under its broadest reasonable interpretation) is an example of marketing activities (i.e., organizing human activity); therefore, aside from the recitations of generic computer and other processing components (identified in Step 2A – Prong 2 below), the limitations identified in the more detailed claim listing above encompass the abstract idea of organizing human activity.

2A – Prong 2: Integrated into a Practical Application?

No – The judicial exception(s) is/are not integrated into a practical application. The dependent claims include the additional elements of their independent claims. Claims 1-6 recite that the method for assessing product concepts is a machine learning method.
Claim 1 recites that the market model comprises a machine learning model trained to provide relevance scores for product themes and product ingredients, and determining an originality metric using a cluster model. Claims 7-10 recite a method of training a market model. Claim 7 recites training a recognition engine using the collected data to recognize ingredients and product themes from the collected data; and training a market model based on the ingredients and the product themes. Claim 9 recites wherein extracting product ingredient and product theme data from the collected data comprises implementing a joint text and image embedding layer. Claim 10 recites wherein extracting product ingredient and product theme data from the collected data comprises implementing a multi-lingual embedding layer. Claim 11 recites a computer readable medium storing processor executable instructions which when executed on a processor cause the processor to carry out a method according to claim 1. Claims 12-20 recite a machine learning system. Claim 12 recites a machine learning system for assessing product concepts, the system comprising: a processor; a data storage device storing: a market model comprising a machine learning model trained to provide relevance scores for product themes and product ingredients; and a cluster model; and a program storage device storing computer program instructions operable to cause the processor to generally perform the recited operations. Claim 17 further recites details of training the market model to provide relevance scores for product themes and product ingredients, including training a recognition engine using the collected data to recognize ingredients and product themes from the collected data; and training a market model based on the ingredients and the product themes. 
Claim 19 recites wherein the program storage device further stores computer program instructions operable to cause the processor to extract product ingredient and product theme data from the collected data by implementing a joint text and image embedding layer. Claim 20 recites wherein the program storage device further stores computer program instructions operable to cause the processor to extract product ingredient and product theme data from the collected data by implementing a multi-lingual embedding layer. The claims as a whole merely describe how to generally “apply” the abstract idea(s) in a computer environment. The claimed processing elements are recited at a high level of generality and are merely invoked as a tool to perform the abstract idea(s). Simply implementing the abstract idea(s) on a general-purpose processor is not a practical application of the abstract idea(s); Applicant’s specification discloses that the invention may be implemented using general-purpose processing elements and other generic components (Spec: p. 8: 1 – p. 9: 9). The use of a processor/processing elements (e.g., as recited in all of the claims) facilitates generic processor operations. The use of a memory or machine-readable media with executable instructions facilitates generic processor operations. The additional elements are recited at a high-level of generality (i.e., as generic processing elements performing generic computer functions) such that the incorporation of the additional processing elements amounts to no more than mere instructions to apply the judicial exception(s) using generic computer components. There is no indication in the Specification that the steps/functions of the claims require any inventive programming or necessitate any specialized or other inventive computer components (i.e., the steps/functions of the claims may be implemented using capabilities of general-purpose computer components). 
Accordingly, the additional elements do not integrate the abstract ideas into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea(s). The processing components presented in the claims simply utilize the capabilities of a general-purpose computer and are, thus, merely tools to implement the abstract idea(s). As seen in MPEP § 2106.05(a)(I) and § 2106.05(f)(2), the court found that accelerating a process when the increased speed solely comes from the capabilities of a general-purpose computer is not sufficient to show an improvement in computer-functionality and it amounts to a mere invocation of computers or machinery as a tool to perform an existing process (see FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016)). Considering that the implementation of the machine learning model and/or the training of the model is performed using generic processing elements, such an implementation is presented as a generic recitation of machine learning in the claims and as a general link to technology. The machine learning-based processing elements are simply tools to generally automate the underlying process that could be performed by a human. It is further noted that, as described in Applicant’s Specification, the machine learning operations are generic machine learning operations. The Specification presents no assertion that there is any improvement in the automated machine learning process itself. Such a generic recitation of machine learning, as recited in the claims, is little more than automating an analogous process that can be performed by a human. There is no transformation or reduction of a particular article to a different state or thing recited in the claims. 
Additionally, even when considering the operations of the additional elements as an ordered combination, the ordered combination does not amount to significantly more than what is present in the claims when each operation is considered separately.

2B: Claim(s) Provide(s) an Inventive Concept?

No – The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception(s). As discussed above with respect to integration of the abstract idea(s) into a practical application, the use of the additional elements to perform the steps identified in Step 2A – Prong 1 above amounts to no more than mere instructions to apply the exceptions using a generic computer component(s). Mere instructions to apply an exception using a generic computer component(s) cannot provide an inventive concept. The claims are not patent eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 11-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Camburn et al. (B. Camburn et al., "Machine Learning-Based Design Concept Evaluation," Journal of Mechanical Design, 142, 031113, March 2020).

[Claim 12] Camburn discloses a machine learning system for assessing product concepts (p. 8: Section 5 – “In this paper, a method is presented for automatically scoring design concepts through ML-based creativity metrics from a large set of crowdsourced design concepts using a machine learning-based approach.
Ontological properties are extracted from concepts using a machine learning package trained on the FreeBase knowledge graph, Wikipedia, and a variety of structured and unstructured data, using the Textrazor API; while Textrazor includes the required functions, there are named entity recognition approaches and software available with comparable function. Metrics to support creativity assessment are proposed based on analysis of the outputs from the API. The method and equations for determining these metrics from outputs of the ontological analysis [23] are provided.” It is understood that the use of machine learning implemented with software and APIs would necessarily incorporate a computer readable medium storing processor executable instructions which when executed on a processor cause the processor to carry out the disclosed operations), the system comprising:

a processor (p. 8: Section 5, as explained above);

a data storage device storing: a market model comprising a machine learning model trained to provide relevance scores for product themes and product ingredients; and a cluster model (p. 3: Section 2.2 – “This section reports key functions and design of the machine learning-based toolchain used to extract ontological data from crowdsourced design concepts. Machine learning enables quantitative measurement of ontological properties of input text. For this method, the ontological properties of interest are the topic of the sentence, what entities are mentioned, and a measure of how related these entities are to the topic. In this method, sentence based or "natural language" descriptions of design concepts are analyzed. Categories are important as elements for creating tags at the sentence level. A neural network is trained for this purpose by matching dependency parse trees to determine broader classifications.”; p. 8: Section 5, as explained above; p. 6: Section 3.2 – “Leveraging this insight, a strategy is provided to assist designers in rapid down selection.
Each design concept is tagged with multiple entities; one of these is defined as the Core Entity or entity with the highest relevance to the Topic. In other words, this Entity is most representative of the idea while being more specific than a Topic. To identify the Core Entity, simply extract the Core Entity with the highest Relevance to the Topic of the design concept. Rcore_entity = max(Ri, Rn) → Core_Entity (3) where Rcore_entity is the relevance of the Core Entity. The relevance of the Core Entity is the maximum relevance score of any entity in the solution to the Topic of the concept.”; pp. 6-7: Section 3.2 – “For a design to be valuable, it should meet certain criteria. Ideation metrics should enable the assessment of whether a design meets the criteria for value. A design that is novel and well described to implement may be valuable. Thus, we propose metrics corresponding to evaluating these characteristics in design ideation data. Recent research identifies a feature of novel designs as being drawn from analogies to another domain [31] or the synthesis of common and uncommon combinations [9]. In the tagging process, each entity is assigned a relevance score to the Core Entity. The Core Entity is determined through probabilistic mapping of entities and syntactic dependency using the Textrazor [23] knowledge graph. The relevance of each entity to the central topic is reported based on the network distance, or path length [39], between two given topics in the FreeBase knowledge graph [40], as determined by the ML analysis provided by Textrazor. Thus, the proposed metric to be used as a proxy for novelty at the concept level is "Span." Span is taken herein to be the sum of distances of each entity in the concept to the central named Entity, as measured by minimum spanning path within the knowledge graph. This aligns with standard novelty measures [7,9]. 
In a simple manner, distance in a network is the number of edges (steps) between two vertices (nodes or keywords).” Distance and relevance scores are two examples of scores that convey relevance. Product themes and product ingredients are two aspects of a product, including related entity, features, description of the product, novelty measures, relevant category, etc. Evaluating the distance between data, such as topics, is an example of using a cluster model.); and a program storage device storing computer program instructions operable to cause the processor (p. 8: Section 5, as explained above) to: receive a plurality of candidate product concepts, each candidate product concept comprising a natural language name and description of a candidate product (p. 3: Section 2.2 – “This section reports key functions and design of the machine learning-based toolchain used to extract ontological data from crowdsourced design concepts. Machine learning enables quantitative measurement of ontological properties of input text. For this method, the ontological properties of interest are the topic of the sentence, what entities are mentioned, and a measure of how related these entities are to the topic. In this method, sentence based or "natural language" descriptions of design concepts are analyzed. Categories are important as elements for creating tags at the sentence level. A neural network is trained for this purpose by matching dependency parse trees to determine broader classifications.”); for each candidate product concept, extract product ingredient data and product theme data, the product ingredient data indicating at least one product ingredient and the product theme data indicating at least one product theme (p. 3: Section 2.2 – “This section reports key functions and design of the machine learning-based toolchain used to extract ontological data from crowdsourced design concepts. 
Machine learning enables quantitative measurement of ontological properties of input text. For this method, the ontological properties of interest are the topic of the sentence, what entities are mentioned, and a measure of how related these entities are to the topic. In this method, sentence based or "natural language" descriptions of design concepts are analyzed. Categories are important as elements for creating tags at the sentence level. A neural network is trained for this purpose by matching dependency parse trees to determine broader classifications.”; pp. 6-7: Section 3.2 – “For a design to be valuable, it should meet certain criteria. Ideation metrics should enable the assessment of whether a design meets the criteria for value. A design that is novel and well described to implement may be valuable. Thus, we propose metrics corresponding to evaluating these characteristics in design ideation data. Recent research identifies a feature of novel designs as being drawn from analogies to another domain [31] or the synthesis of common and uncommon combinations [9]. In the tagging process, each entity is assigned a relevance score to the Core Entity. The Core Entity is determined through probabilistic mapping of entities and syntactic dependency using the Textrazor [23] knowledge graph. The relevance of each entity to the central topic is reported based on the network distance, or path length [39], between two given topics in the FreeBase knowledge graph [40], as determined by the ML analysis provided by Textrazor. Thus, the proposed metric to be used as a proxy for novelty at the concept level is "Span." Span is taken herein to be the sum of distances of each entity in the concept to the central named Entity, as measured by minimum spanning path within the knowledge graph. This aligns with standard novelty measures [7,9]. 
In a simple manner, distance in a network is the number of edges (steps) between two vertices (nodes or keywords).” Distance and relevance scores are two examples of scores that convey relevance. Product themes and product ingredients are two aspects of a product, including related entity, features, description of the product, novelty measures, relevant category, etc. Evaluating the distance between data, such as topics, is an example of using a cluster model.); for each candidate product concept, determining a relevance metric using the market model, and determine an originality metric using the cluster model (p. 3: Section 2.2 – “This section reports key functions and design of the machine learning-based toolchain used to extract ontological data from crowdsourced design concepts. Machine learning enables quantitative measurement of ontological properties of input text. For this method, the ontological properties of interest are the topic of the sentence, what entities are mentioned, and a measure of how related these entities are to the topic. In this method, sentence based or "natural language" descriptions of design concepts are analyzed. Categories are important as elements for creating tags at the sentence level. A neural network is trained for this purpose by matching dependency parse trees to determine broader classifications.”); p. 8: Section 5, as explained above; p. 6: Section 3.2 – “Leveraging this insight, a strategy is provided to assist designers in rapid down selection. Each design concept is tagged with multiple entities; one of these is defined as the Core Entity or entity with the highest relevance to the Topic. In other words, this Entity is most representative of the idea while being more specific than a Topic. To identify the Core Entity, simply extract the Core Entity with the highest Relevance to the Topic of the design concept. 
Rcore_entity = max(Ri, Rn) → Core_Entity (3) where Rcore_entity is the relevance of the Core Entity. The relevance of the Core Entity is the maximum relevance score of any entity in the solution to the Topic of the concept.”; pp. 6-7: Section 3.2 – “For a design to be valuable, it should meet certain criteria. Ideation metrics should enable the assessment of whether a design meets the criteria for value. A design that is novel and well described to implement may be valuable. Thus, we propose metrics corresponding to evaluating these characteristics in design ideation data. Recent research identifies a feature of novel designs as being drawn from analogies to another domain [31] or the synthesis of common and uncommon combinations [9]. In the tagging process, each entity is assigned a relevance score to the Core Entity. The Core Entity is determined through probabilistic mapping of entities and syntactic dependency using the Textrazor [23] knowledge graph. The relevance of each entity to the central topic is reported based on the network distance, or path length [39], between two given topics in the FreeBase knowledge graph [40], as determined by the ML analysis provided by Textrazor. Thus, the proposed metric to be used as a proxy for novelty at the concept level is "Span." Span is taken herein to be the sum of distances of each entity in the concept to the central named Entity, as measured by minimum spanning path within the knowledge graph. This aligns with standard novelty measures [7,9]. In a simple manner, distance in a network is the number of edges (steps) between two vertices (nodes or keywords).” Distance and relevance scores are two examples of scores that convey relevance. Product themes and product ingredients are two aspects of a product, including related entity, features, description of the product, novelty measures, relevant category, etc. 
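The Core Entity rule of equation (3) and the Span metric quoted above reduce to a max over relevance scores and a sum of shortest-path distances in the knowledge graph. A minimal sketch, assuming a toy graph and hand-picked relevance values (none of which come from Camburn or the claims):

```python
from collections import deque

def shortest_path_length(graph, start, goal):
    """Breadth-first search: distance = number of edges between two nodes."""
    if start == goal:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        for nbr in graph[node]:
            if nbr == goal:
                return dist + 1
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return None  # disconnected

# Toy knowledge graph and relevance-to-Topic scores (illustrative only).
graph = {
    "bus": {"transit", "ridesharing"},
    "transit": {"bus", "city"},
    "ridesharing": {"bus", "social network"},
    "social network": {"ridesharing"},
    "city": {"transit"},
}
relevance = {"bus": 0.9, "ridesharing": 0.7, "city": 0.4, "social network": 0.3}

# Equation (3): the Core Entity is the entity with maximum relevance to the Topic.
core = max(relevance, key=relevance.get)

# "Span": sum of graph distances from each other tagged entity to the Core Entity.
span = sum(shortest_path_length(graph, e, core) for e in relevance if e != core)
print(core, span)  # → bus 5
```

A higher Span indicates entities drawn from more distant regions of the graph, which is why the reference treats it as a proxy for novelty.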
Evaluating the distance between data, such as topics, is an example of using a cluster model.); rank the candidate product concepts in a ranking order according to the respective relevance metrics and originality metrics (p. 7: Section 4.2 – The highest ranked designs are identified, including based on novelty.; p. 8: Section 4.3.2 – “Next, a supplementary pilot study was conducted to take a closer look at the two top scoring concepts with the Core Entity, "bus," cluster to search for innovative bus-related technologies. The highest scoring concept, that was also tagged with the Core Entity, "bus" was a form of social network for ridesharing network based on the rider's workplace.”; pp. 6-7: Section 3.2 – Novelty is determined and taken into account in the related analyses.); and generate a ranked list of candidate product concepts according to the ranking order (p. 7: Section 4.2 – The highest ranked designs are identified.). [Claim 1] Claim 1 recites limitations already addressed by the rejection of claim 12 above; therefore, the same rejection applies. [Claim 11] Claim 11 recites limitations already addressed by the rejection of claim 12 above; therefore, the same rejection applies. Furthermore, Camburn discloses a computer readable medium storing processor executable instructions which when executed on a processor cause the processor to carry out the disclosed operations (Camburn: p. 8: Section 5 – “In this paper, a method is presented for automatically scoring design concepts through ML-based creativity metrics from a large set of crowdsourced design concepts using a machine learning-based approach. Ontological properties are extracted from concepts using a machine learning package trained on the FreeBase knowledge graph, Wikipedia, and a variety of structured and unstructured data, using the Textrazor API, while Textrazor includes the required functions, there are named entity recognition approaches and software available with comparable function. 
Metrics to support creativity assessment are proposed based on analysis of the outputs from the API. The method and equations for determining these metrics from outputs of the ontological analysis [23] are provided.” It is understood that the use of machine learning implemented with software and APIs would necessarily incorporate a computer readable medium storing processor executable instructions which when executed on a processor cause the processor to carry out the disclosed operations.). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 2-3 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Camburn et al. (B. Camburn et al, "Machine Learning-Based Design Concept Evaluation," Journal of Mechanical Design, 142, 031113, March 2020.), as applied to claims 1 and 12 above, in view of Kale et al. (US 2022/0121702). [Claims 2, 13] Camburn discloses wherein the program storage device further stores computer program instructions operable to cause the processor to (p. 8: Section 5, as explained above) extract product ingredient data and product theme data using a recognition model (p. 
8: Section 5 – “Ontological properties are extracted from concepts using a machine learning package trained on the FreeBase knowledge graph, Wikipedia, and a variety of structured and unstructured data, using the Textrazor API, while Textrazor includes the required functions, there are named entity recognition approaches and software available with comparable function. Metrics to support creativity assessment are proposed based on analysis of the outputs from the API.”; p. 7: Section 4.1 – “Second, the input data are fed as natural language…”; p. 4: Section 2.2 – “To classify unique entities, and their contextual dependency at the sentence level, a dependency parse tree is established for the lemma of each term. This parse tree establishes part of speech and, through association of subject and object, enables training of a recurrent neural network to infer, probabilistically, the sense of a word (see Fig. 3). For instance, dependency parsing enables a trained hierarchy to differentiate between "Ford" the motor car company and "ford" a shallow segment of a river through contextual information, e.g., the sentence also contains "motor" in a given clause. These tagging requirements were met by leveraging the Textrazor platform [23], where in-depth supporting literature includes [24, 27-29, 36].”). Camburn does not explicitly disclose wherein the program storage device further stores computer program instructions operable to cause the processor to extract product ingredient data and product theme data using an embedding model. Kale discloses that a cross-lingual-multimodal-embedding model may be used to generate a text embedding with a cross-lingual embedding space, including for text embeddings of multiple languages (Kale: ¶ 64). Additionally, Kale explains, “As further shown in FIG. 
3, the cross-lingual image search system 106 utilizes one or more neural network layers of the cross-lingual-multimodal-embedding model 304 to transform the text embedding 308 into a cross-lingual-multimodal embedding 314 for the text 302 within a multimodal embedding space 316.” (Kale: ¶ 69) Translations are easily facilitated (Kale: ¶ 65 – “As an example, in some embodiments, the cross-lingual image search system 106 generates a text embedding for a first text in a first language that differs from a text embedding for a second text in a second language even where the first text corresponds to a translation of the second text into the first language.”). Kale further explains some of the benefits of the disclosed invention as follows: “In some instances, the disclosed systems train the cross-lingual image retrieval model using a multimodal metric loss function that tightens embedding clusters by pushing embeddings for dissimilar texts and digital images away from one another. Thus, the disclosed systems flexibly train a cross-lingual image retrieval model without reliance on input-text data from multiple languages. Further, the disclosed systems more accurately identify images that are relevant to a query for improved digital image retrieval.” (Kale: ¶ 2). The Examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to modify Camburn wherein the program storage device further stores computer program instructions operable to cause the processor to extract product ingredient data and product theme data using an embedding model in order to enhance the ability of Camburn to more conveniently gather and more accurately analyze larger bodies of potentially relevant information, including information in varying formats (such as text, image, etc.) as well as in various languages. 
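The role Kale's embedding model plays in this rejection, mapping text in different languages into one shared vector space so that translations of the same concept land near each other, can be illustrated with a minimal sketch (the vectors and names below are hand-picked assumptions, not Kale's trained model):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hand-picked toy vectors standing in for a trained cross-lingual embedding
# space: translations of the same concept receive nearby vectors, while an
# unrelated concept sits far away.
embed = {
    "green tea":  [0.90, 0.10, 0.00],   # English
    "té verde":   [0.88, 0.12, 0.05],   # Spanish translation, nearby vector
    "motor oil":  [0.00, 0.20, 0.95],   # unrelated concept
}

query = embed["green tea"]
ranked = sorted(embed, key=lambda t: cosine(embed[t], query), reverse=True)
print(ranked)  # → ['green tea', 'té verde', 'motor oil']
```

Ranking by similarity in the shared space is what lets such a model retrieve related items across languages without translating the query first, which is the benefit the rejection relies on.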
[Claims 3, 14] Camburn does not explicitly disclose wherein the embedding model comprises mappings from a plurality of languages. Kale discloses that a cross-lingual-multimodal-embedding model may be used to generate a text embedding with a cross-lingual embedding space, including for text embeddings of multiple languages (Kale: ¶ 64). Additionally, Kale explains, “As further shown in FIG. 3, the cross-lingual image search system 106 utilizes one or more neural network layers of the cross-lingual-multimodal-embedding model 304 to transform the text embedding 308 into a cross-lingual-multimodal embedding 314 for the text 302 within a multimodal embedding space 316.” (Kale: ¶ 69) Translations are easily facilitated (Kale: ¶ 65 – “As an example, in some embodiments, the cross-lingual image search system 106 generates a text embedding for a first text in a first language that differs from a text embedding for a second text in a second language even where the first text corresponds to a translation of the second text into the first language.”). Kale further explains some of the benefits of the disclosed invention as follows: “In some instances, the disclosed systems train the cross-lingual image retrieval model using a multimodal metric loss function that tightens embedding clusters by pushing embeddings for dissimilar texts and digital images away from one another. Thus, the disclosed systems flexibly train a cross-lingual image retrieval model without reliance on input-text data from multiple languages. Further, the disclosed systems more accurately identify images that are relevant to a query for improved digital image retrieval.” (Kale: ¶ 2). 
The Examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to modify Camburn wherein the embedding model comprises mappings from a plurality of languages in order to enhance the ability of Camburn to more conveniently gather and more accurately analyze larger bodies of potentially relevant information, including information in varying formats (such as text, image, etc.) as well as in various languages. Claims 4-6 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Camburn et al. (B. Camburn et al, "Machine Learning-Based Design Concept Evaluation," Journal of Mechanical Design, 142, 031113, March 2020.), as applied to claims 1 and 12 above, in view of Zimmerman et al. (US 2010/0205034) and further in view of Pal et al. (US 2017/0286975). [Claims 4, 15] Camburn does not explicitly disclose wherein determining a relevance metric using a market model for each candidate product concept comprises: simulating a consumer response to each respective candidate product concept using a plurality of simulated agents which are configured to update the market model and thereby obtain a respective simulated market model for the respective candidate product concept, and determining the respective relevance metric from the respective simulated market model for the candidate product concept. Pal performs multiple iterations (i.e., iterative cycles) to simulate a spread of influence within a social network to identify influencers with the greatest influence as demonstrated by their reach (e.g., spread, degree of connections) in the social network within a given time period (Pal: fig. 3; ¶¶ 40, 56-57). Pal’s description of the related art discusses how social network platforms are used to advertise products (Pal: ¶¶ 4-5). 
While Pal does not explicitly use simulated agents and evaluate the consumer response in regard to a candidate product concept, Zimmerman explains that agents may be used to simulate interest in a product with certain attributes (Zimmerman: ¶¶ 15-17, 19-21). The Examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to modify Camburn wherein determining a relevance metric using a market model for each candidate product concept comprises: simulating a consumer response to each respective candidate product concept using a plurality of simulated agents which are configured to update the market model and thereby obtain a respective simulated market model for the respective candidate product concept, and determining the respective relevance metric from the respective simulated market model for the candidate product concept in order to assist Camburn in determining which product designs are more likely to appeal to customers in actual real-world situations while also facilitating the development of a successful marketing strategy for a new product and/or new product design. [Claims 5, 16] Camburn does not explicitly disclose wherein simulating the consumer response to the candidate product using a plurality of simulated agents comprises simulating interactions between simulated agents over a plurality of simulation cycles, wherein simulated agents of the plurality of simulated agents are configured to be connected to other connected simulated agents of the plurality of simulated agents and thereby influence behavior of the connected simulated agents. Pal performs multiple iterations (i.e., iterative cycles) to simulate a spread of influence within a social network to identify influencers with the greatest influence as demonstrated by their reach (e.g., spread, degree of connections) in the social network within a given time period (Pal: fig. 3; ¶¶ 40, 56-57). 
Pal’s description of the related art discusses how social network platforms are used to advertise products (Pal: ¶¶ 4-5). While Pal does not explicitly use simulated agents and evaluate the consumer response in regard to a candidate product concept, Zimmerman explains that agents may be used to simulate interest in a product with certain attributes (Zimmerman: ¶¶ 15-17, 19-21). The Examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to modify Camburn wherein simulating the consumer response to the candidate product using a plurality of simulated agents comprises simulating interactions between simulated agents over a plurality of simulation cycles, wherein simulated agents of the plurality of simulated agents are configured to be connected to other connected simulated agents of the plurality of simulated agents and thereby influence behavior of the connected simulated agents in order to assist Camburn in determining which product designs are more likely to appeal to customers in actual real-world situations while also facilitating the development of a successful marketing strategy for a new product and/or new product design. [Claim 6] Camburn does not explicitly disclose wherein a first simulated agent of the plurality of simulated agents has a higher number of connections than a second simulated agent of the plurality of simulated agents. Pal performs multiple iterations (i.e., iterative cycles) to simulate a spread of influence within a social network to identify influencers with the greatest influence as demonstrated by their reach (e.g., spread, degree of connections) in the social network within a given time period (Pal: fig. 3; ¶¶ 40, 56-57). Pal’s description of the related art discusses how social network platforms are used to advertise products (Pal: ¶¶ 4-5). 
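The kind of agent-based simulation the Pal and Zimmerman combination points toward, connected agents influencing one another's interest in a product concept over repeated simulation cycles, can be sketched as follows (the network, seeding, and adoption rule are illustrative assumptions taken from neither reference):

```python
import random

def simulate(adjacency, seeds, cycles, p=0.5, rng=None):
    """Spread interest in a product concept over connected agents: in each
    cycle, every interested agent convinces each uninterested neighbour
    with probability p."""
    rng = rng or random.Random(0)  # fixed seed so runs are reproducible
    interested = set(seeds)
    for _ in range(cycles):
        newly = {n for a in interested for n in adjacency[a]
                 if n not in interested and rng.random() < p}
        interested |= newly
    return interested

# Illustrative network: agent 0 is a highly connected "influencer";
# agent 5 has a single connection.
adjacency = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0, 5}, 5: {4}}

# With p=1.0 the spread is deterministic: seeding the influencer reaches
# every agent within two cycles.
reach = simulate(adjacency, seeds={0}, cycles=2, p=1.0)
print(sorted(reach))  # → [0, 1, 2, 3, 4, 5]
```

Seeding the poorly connected agent 5 instead reaches only three agents in the same two cycles, which is the intuition behind claim 6's distinction between agents with higher and lower numbers of connections.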
While Pal does not explicitly use simulated agents and evaluate the consumer response in regard to a candidate product concept, Zimmerman explains that agents may be used to simulate interest in a product with certain attributes (Zimmerman: ¶¶ 15-17, 19-21). The Examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to modify Camburn wherein a first simulated agent of the plurality of simulated agents has a higher number of connections than a second simulated agent of the plurality of simulated agents in order to assist Camburn in determining which product designs are more likely to appeal to customers in actual real-world situations while also facilitating the development of a successful marketing strategy for a new product and/or new product design. Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Camburn et al. (B. Camburn et al, "Machine Learning-Based Design Concept Evaluation," Journal of Mechanical Design, 142, 031113, March 2020.) in view of Agarwal et al. (US 2022/0051479). [Claim 7] Camburn discloses a method of training a market model to provide relevance scores for product themes and product ingredients (p. 3: Section 2.2 – “This section reports key functions and design of the machine learning-based toolchain used to extract ontological data from crowdsourced design concepts. Machine learning enables quantitative measurement of ontological properties of input text. For this method, the ontological properties of interest are the topic of the sentence, what entities are mentioned, and a measure of how related these entities are to the topic. In this method, sentence based or "natural language" descriptions of design concepts are analyzed. Categories are important as elements for creating tags at the sentence level. A neural network is trained for this purpose by matching dependency parse trees to determine broader classifications.”); p. 
8: Section 5, as explained above; p. 6: Section 3.2 – “Leveraging this insight, a strategy is provided to assist designers in rapid down selection. Each design concept is tagged with multiple entities; one of these is defined as the Core Entity or entity with the highest relevance to the Topic. In other words, this Entity is most representative of the idea while being more specific than a Topic. To identify the Core Entity, simply extract the Core Entity with the highest Relevance to the Topic of the design concept. Rcore_entity = max(Ri, Rn) → Core_Entity (3) where Rcore_entity is the relevance of the Core Entity. The relevance of the Core Entity is the maximum relevance score of any entity in the solution to the Topic of the concept.”; pp. 6-7: Section 3.2 – “For a design to be valuable, it should meet certain criteria. Ideation metrics should enable the assessment of whether a design meets the criteria for value. A design that is novel and well described to implement may be valuable. Thus, we propose metrics corresponding to evaluating these characteristics in design ideation data. Recent research identifies a feature of novel designs as being drawn from analogies to another domain [31] or the synthesis of common and uncommon combinations [9]. In the tagging process, each entity is assigned a relevance score to the Core Entity. The Core Entity is determined through probabilistic mapping of entities and syntactic dependency using the Textrazor [23] knowledge graph. The relevance of each entity to the central topic is reported based on the network distance, or path length [39], between two given topics in the FreeBase knowledge graph [40], as determined by the ML analysis provided by Textrazor. Thus, the proposed metric to be used as a proxy for novelty at the concept level is "Span." 
Span is taken herein to be the sum of distances of each entity in the concept to the central named Entity, as measured by minimum spanning path within the knowledge graph. This aligns with standard novelty measures [7,9]. In a simple manner, distance in a network is the number of edges (steps) between two vertices (nodes or keywords).” Distance and relevance scores are two examples of scores that convey relevance. Product themes and product ingredients are two aspects of a product, including related entity, features, description of the product, novelty measures, relevant category, etc. Evaluating the distance between data, such as topics, is an example of using a cluster model.), the method comprising: extracting product ingredient data and product theme data from the collected data (p. 3: Section 2.2 – “This section reports key functions and design of the machine learning-based toolchain used to extract ontological data from crowdsourced design concepts. Machine learning enables quantitative measurement of ontological properties of input text. For this method, the ontological properties of interest are the topic of the sentence, what entities are mentioned, and a measure of how related these entities are to the topic. In this method, sentence based or "natural language" descriptions of design concepts are analyzed. Categories are important as elements for creating tags at the sentence level. A neural network is trained for this purpose by matching dependency parse trees to determine broader classifications.”; pp. 6-7: Section 3.2 – “For a design to be valuable, it should meet certain criteria. Ideation metrics should enable the assessment of whether a design meets the criteria for value. A design that is novel and well described to implement may be valuable. Thus, we propose metrics corresponding to evaluating these characteristics in design ideation data. 
Recent research identifies a feature of novel designs as being drawn from analogies to another domain [31] or the synthesis of common and uncommon combinations [9]. In the tagging process, each entity is assigned a relevance score to the Core Entity. The Core Entity is determined through probabilistic mapping of entities and syntactic dependency using the Textrazor [23] knowledge graph. The relevance of each entity to the central topic is reported based on the network distance, or path length [39], between two given topics in the FreeBase knowledge graph [40], as determined by the ML analysis provided by Textrazor. Thus, the proposed metric to be used as a proxy for novelty at the concept level is "Span." Span is taken herein to be the sum of distances of each entity in the concept to the central named Entity, as measured by minimum spanning path within the knowledge graph. This aligns with standard novelty measures [7,9]. In a simple manner, distance in a network is the number of edges (steps) between two vertices (nodes or keywords).” Distance and relevance scores are two examples of scores that convey relevance. Product themes and product ingredients are two aspects of a product, including related entity, features, description of the product, novelty measures, relevant category, etc. Evaluating the distance between data, such as topics, is an example of using a cluster model.); training a recognition engine using the collected data to recognize ingredients and product themes from the collected data (p. 3: Section 2.2 – “This section reports key functions and design of the machine learning-based toolchain used to extract ontological data from crowdsourced design concepts. Machine learning enables quantitative measurement of ontological properties of input text. For this method, the ontological properties of interest are the topic of the sentence, what entities are mentioned, and a measure of how related these entities are to the topic. 
In this method, sentence based or "natural language" descriptions of design concepts are analyzed. Categories are important as elements for creating tags at the sentence level. A neural network is trained for this purpose by matching dependency parse trees to determine broader classifications.”; p. 8: Section 5, as explained above; p. 6: Section 3.2 – “Leveraging this insight, a strategy is provided to assist designers in rapid down selection. Each design concept is tagged with multiple entities; one of these is defined as the Core Entity or entity with the highest relevance to the Topic. In other words, this Entity is most representative of the idea while being more specific than a Topic. To identify the Core Entity, simply extract the Core Entity with the highest Relevance to the Topic of the design concept. Rcore_entity = max(Ri, Rn) → Core_Entity (3) where Rcore_entity is the relevance of the Core Entity. The relevance of the Core Entity is the maximum relevance score of any entity in the solution to the Topic of the concept.”; pp. 6-7: Section 3.2 – “For a design to be valuable, it should meet certain criteria. Ideation metrics should enable the assessment of whether a design meets the criteria for value. A design that is novel and well described to implement may be valuable. Thus, we propose metrics corresponding to evaluating these characteristics in design ideation data. Recent research identifies a feature of novel designs as being drawn from analogies to another domain [31] or the synthesis of common and uncommon combinations [9]. In the tagging process, each entity is assigned a relevance score to the Core Entity. The Core Entity is determined through probabilistic mapping of entities and syntactic dependency using the Textrazor [23] knowledge graph. 
The relevance of each entity to the central topic is reported based on the network distance, or path length [39], between two given topics in the FreeBase knowledge graph [40], as determined by the ML analysis provided by Textrazor. Thus, the proposed metric to be used as a proxy for novelty at the concept level is "Span." Span is taken herein to be the sum of distances of each entity in the concept to the central named Entity, as measured by minimum spanning path within the knowledge graph. This aligns with standard novelty measures [7,9]. In a simple manner, distance in a network is the number of edges (steps) between two vertices (nodes or keywords).” Distance and relevance scores are two examples of scores that convey relevance. Product themes and product ingredients are two aspects of a product, including related entity, features, description of the product, novelty measures, relevant category, etc. Evaluating the distance between data, such as topics, is an example of using a cluster model.); and training a market model based on the ingredients and the product themes (p. 3: Section 2.2 – “This section reports key functions and design of the machine learning-based toolchain used to extract ontological data from crowdsourced design concepts. Machine learning enables quantitative measurement of ontological properties of input text. For this method, the ontological properties of interest are the topic of the sentence, what entities are mentioned, and a measure of how related these entities are to the topic. In this method, sentence based or "natural language" descriptions of design concepts are analyzed. Categories are important as elements for creating tags at the sentence level. A neural network is trained for this purpose by matching dependency parse trees to determine broader classifications.”; p. 8: Section 5, as explained above; p. 6: Section 3.2 – “Leveraging this insight, a strategy is provided to assist designers in rapid down selection. 
Each design concept is tagged with multiple entities; one of these is defined as the Core Entity or entity with the highest relevance to the Topic. In other words, this Entity is most representative of the idea while being more specific than a Topic. To identify the Core Entity, simply extract the Core Entity with the highest Relevance to the Topic of the design concept. Rcore_entity = max(Ri, Rn) → Core_Entity (3) where Rcore_entity is the relevance of the Core Entity. The relevance of the Core Entity is the maximum relevance score of any entity in the solution to the Topic of the concept.”; pp. 6-7: Section 3.2 – “For a design to be valuable, it should meet certain criteria. Ideation metrics should enable the assessment of whether a design meets the criteria for value. A design that is novel and well described to implement may be valuable. Thus, we propose metrics corresponding to evaluating these characteristics in design ideation data. Recent research identifies a feature of novel designs as being drawn from analogies to another domain [31] or the synthesis of common and uncommon combinations [9]. In the tagging process, each entity is assigned a relevance score to the Core Entity. The Core Entity is determined through probabilistic mapping of entities and syntactic dependency using the Textrazor [23] knowledge graph. The relevance of each entity to the central topic is reported based on the network distance, or path length [39], between two given topics in the FreeBase knowledge graph [40], as determined by the ML analysis provided by Textrazor. Thus, the proposed metric to be used as a proxy for novelty at the concept level is "Span." Span is taken herein to be the sum of distances of each entity in the concept to the central named Entity, as measured by minimum spanning path within the knowledge graph. This aligns with standard novelty measures [7,9].
In a simple manner, distance in a network is the number of edges (steps) between two vertices (nodes or keywords).” Distance and relevance scores are two examples of scores that convey relevance. Product themes and product ingredients are two aspects of a product, including related entity, features, description of the product, novelty measures, relevant category, etc. Evaluating the distance between data, such as topics, is an example of using a cluster model.). Camburn seeks human user feedback to compare to the machine learning results (Camburn: p. 2: Section 1.1); however, Camburn does not explicitly perform the step of monitoring social network posts to collect data. Agarwal discloses that public sentiment toward certain apparel designs may be gleaned from comments made on and retrieved from social media (Agarwal: ¶ 47). The identified keywords are indicators (i.e., market sensors) representative of public sentiment related to design elements of the apparel (Agarwal: ¶ 47). The Examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to modify Camburn to perform the step of monitoring social network posts to collect data in order to enhance the body of data made available to Camburn to help train the machine learning models, thereby improving the accuracy of the machine learning and corresponding models. [Claim 8] Camburn seeks human user feedback to compare to the machine learning results (Camburn: p. 2: Section 1.1); however, Camburn does not explicitly disclose wherein monitoring social network posts to collect data comprises implementing a plurality of market sensors. Agarwal discloses that public sentiment toward certain apparel designs may be gleaned from comments made on and retrieved from social media (Agarwal: ¶ 47).
The identified keywords are indicators (i.e., market sensors) representative of public sentiment related to design elements of the apparel (Agarwal: ¶ 47). The Examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to modify Camburn wherein monitoring social network posts to collect data comprises implementing a plurality of market sensors in order to enhance the body of data made available to Camburn to help train the machine learning models, thereby improving the accuracy of the machine learning and corresponding models. Claims 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Camburn et al. (B. Camburn et al, "Machine Learning-Based Design Concept Evaluation," Journal of Mechanical Design, 142, 031113, March 2020.), as applied to claim 12 above, in view of Agarwal et al. (US 2022/0051479). [Claims 17-18] Claims 17-18 recite limitations already addressed by the rejections of claims 7-8 and 12 above; therefore, the same rejections apply. Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Camburn et al. (B. Camburn et al, "Machine Learning-Based Design Concept Evaluation," Journal of Mechanical Design, 142, 031113, March 2020.) in view of Agarwal et al. (US 2022/0051479), as applied to claim 7 above, in view of Kale et al. (US 2022/0121702). [Claim 9] Camburn does not explicitly disclose wherein the collected data comprises text data and image data and wherein extracting product ingredient and product theme data from the collected data comprises implementing a joint text and image embedding layer. Kale discloses that a cross-lingual-multimodal-embedding model may be used to generate a text embedding with a cross-lingual embedding space, including for text embeddings of multiple languages (Kale: ¶ 64). Additionally, Kale explains, “As further shown in FIG.
3, the cross-lingual image search system 106 utilizes one or more neural network layers of the cross-lingual-multimodal-embedding model 304 to transform the text embedding 308 into a cross-lingual-multimodal embedding 314 for the text 302 within a multimodal embedding space 316.” (Kale: ¶ 69). Translations are easily facilitated (Kale: ¶ 65 – “As an example, in some embodiments, the cross-lingual image search system 106 generates a text embedding for a first text in a first language that differs from a text embedding for a second text in a second language even where the first text corresponds to a translation of the second text into the first language.”). Kale further explains some of the benefits of the disclosed invention as follows: “In some instances, the disclosed systems train the cross-lingual image retrieval model using a multimodal metric loss function that tightens embedding clusters by pushing embeddings for dissimilar texts and digital images away from one another. Thus, the disclosed systems flexibly train a cross-lingual image retrieval model without reliance on input-text data from multiple languages. Further, the disclosed systems more accurately identify images that are relevant to a query for improved digital image retrieval.” (Kale: ¶ 2). The Examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to modify Camburn wherein the collected data comprises text data and image data and wherein extracting product ingredient and product theme data from the collected data comprises implementing a joint text and image embedding layer in order to enhance the ability of Camburn to more conveniently gather and more accurately analyze larger bodies of potentially relevant information, including information in varying formats (such as text, image, etc.) as well as in various languages.
[Claim 10] Camburn does not explicitly disclose wherein extracting product ingredient and product theme data from the collected data comprises implementing a multi-lingual embedding layer. Kale discloses that a cross-lingual-multimodal-embedding model may be used to generate a text embedding with a cross-lingual embedding space, including for text embeddings of multiple languages (Kale: ¶ 64). Additionally, Kale explains, “As further shown in FIG. 3, the cross-lingual image search system 106 utilizes one or more neural network layers of the cross-lingual-multimodal-embedding model 304 to transform the text embedding 308 into a cross-lingual-multimodal embedding 314 for the text 302 within a multimodal embedding space 316.” (Kale: ¶ 69). Translations are easily facilitated (Kale: ¶ 65 – “As an example, in some embodiments, the cross-lingual image search system 106 generates a text embedding for a first text in a first language that differs from a text embedding for a second text in a second language even where the first text corresponds to a translation of the second text into the first language.”). Kale further explains some of the benefits of the disclosed invention as follows: “In some instances, the disclosed systems train the cross-lingual image retrieval model using a multimodal metric loss function that tightens embedding clusters by pushing embeddings for dissimilar texts and digital images away from one another. Thus, the disclosed systems flexibly train a cross-lingual image retrieval model without reliance on input-text data from multiple languages. Further, the disclosed systems more accurately identify images that are relevant to a query for improved digital image retrieval.” (Kale: ¶ 2).
The Examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to modify Camburn wherein extracting product ingredient and product theme data from the collected data comprises implementing a multi-lingual embedding layer in order to enhance the ability of Camburn to more conveniently gather and more accurately analyze larger bodies of potentially relevant information, including information in varying formats (such as text, image, etc.) as well as in various languages. Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Camburn et al. (B. Camburn et al, "Machine Learning-Based Design Concept Evaluation," Journal of Mechanical Design, 142, 031113, March 2020.) in view of Agarwal et al. (US 2022/0051479), as applied to claims 12 and 17 above, in view of Kale et al. (US 2022/0121702). [Claim 19] Camburn does not explicitly disclose wherein the collected data comprises text data and image data and wherein the program storage device further stores computer program instructions operable to cause the processor to extract product ingredient and product theme data from the collected data by implementing a joint text and image embedding layer. Kale discloses that a cross-lingual-multimodal-embedding model may be used to generate a text embedding with a cross-lingual embedding space, including for text embeddings of multiple languages (Kale: ¶ 64). Additionally, Kale explains, “As further shown in FIG. 
3, the cross-lingual image search system 106 utilizes one or more neural network layers of the cross-lingual-multimodal-embedding model 304 to transform the text embedding 308 into a cross-lingual-multimodal embedding 314 for the text 302 within a multimodal embedding space 316.” (Kale: ¶ 69). Translations are easily facilitated (Kale: ¶ 65 – “As an example, in some embodiments, the cross-lingual image search system 106 generates a text embedding for a first text in a first language that differs from a text embedding for a second text in a second language even where the first text corresponds to a translation of the second text into the first language.”). Kale further explains some of the benefits of the disclosed invention as follows: “In some instances, the disclosed systems train the cross-lingual image retrieval model using a multimodal metric loss function that tightens embedding clusters by pushing embeddings for dissimilar texts and digital images away from one another. Thus, the disclosed systems flexibly train a cross-lingual image retrieval model without reliance on input-text data from multiple languages. Further, the disclosed systems more accurately identify images that are relevant to a query for improved digital image retrieval.” (Kale: ¶ 2). The Examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to modify Camburn wherein the collected data comprises text data and image data and wherein the program storage device further stores computer program instructions operable to cause the processor to extract product ingredient and product theme data from the collected data by implementing a joint text and image embedding layer in order to enhance the ability of Camburn to more conveniently gather and more accurately analyze larger bodies of potentially relevant information, including information in varying formats (such as text, image, etc.)
as well as in various languages. [Claim 20] Camburn does not explicitly disclose wherein the program storage device further stores computer program instructions operable to cause the processor to extract product ingredient and product theme data from the collected data by implementing a multi-lingual embedding layer. Kale discloses that a cross-lingual-multimodal-embedding model may be used to generate a text embedding with a cross-lingual embedding space, including for text embeddings of multiple languages (Kale: ¶ 64). Additionally, Kale explains, “As further shown in FIG. 3, the cross-lingual image search system 106 utilizes one or more neural network layers of the cross-lingual-multimodal-embedding model 304 to transform the text embedding 308 into a cross-lingual-multimodal embedding 314 for the text 302 within a multimodal embedding space 316.” (Kale: ¶ 69). Translations are easily facilitated (Kale: ¶ 65 – “As an example, in some embodiments, the cross-lingual image search system 106 generates a text embedding for a first text in a first language that differs from a text embedding for a second text in a second language even where the first text corresponds to a translation of the second text into the first language.”). Kale further explains some of the benefits of the disclosed invention as follows: “In some instances, the disclosed systems train the cross-lingual image retrieval model using a multimodal metric loss function that tightens embedding clusters by pushing embeddings for dissimilar texts and digital images away from one another. Thus, the disclosed systems flexibly train a cross-lingual image retrieval model without reliance on input-text data from multiple languages. Further, the disclosed systems more accurately identify images that are relevant to a query for improved digital image retrieval.” (Kale: ¶ 2).
The Examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant’s invention to modify Camburn wherein the program storage device further stores computer program instructions operable to cause the processor to extract product ingredient and product theme data from the collected data by implementing a multi-lingual embedding layer in order to enhance the ability of Camburn to more conveniently gather and more accurately analyze larger bodies of potentially relevant information, including information in varying formats (such as text, image, etc.) as well as in various languages. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hall et al. (US 2003/0225786) – Ranks new concepts from best to worst and simulates customer reaction. Garner et al. (US 2018/0129755) – Calculates an emergence score for terms representing technological emergence. Dixit (US 2016/0012557) – Evaluates novelty of an idea. Liu et al. (US 2020/0401627) – Uses embedding layers to train a neural network. Li et al. (US 2020/0250537) – Uses a text embedding model with neural network layers. Masuyama et al. (US 2006/0122849) – Identifies novel technology. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUSANNA M DIAZ whose telephone number is (571)272-6733. The examiner can normally be reached M-F, 8 am-4:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Epstein, can be reached at (571) 270-5389.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SUSANNA M. DIAZ/ Primary Examiner Art Unit 3625A
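The Camburn passages quoted in the rejections above define two concrete computations: the Core Entity is the entity whose relevance score to the concept's Topic is maximal (eq. 3), and "Span" is the sum of knowledge-graph distances (shortest-path edge counts) from each tagged entity to that Core Entity. A minimal sketch of both, using a tiny hypothetical graph and made-up entity names in place of the TextRazor/FreeBase lookups the paper relies on (all names and scores below are illustrative, not from Camburn or the application):

```python
from collections import deque

def shortest_path_len(graph, src, dst):
    """BFS edge count between two nodes of an undirected graph."""
    if src == dst:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        for nbr in graph[node]:
            if nbr == dst:
                return dist + 1
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    raise ValueError("no path between entities")

def core_entity(relevance):
    """Entity with the maximum relevance score to the Topic (eq. 3)."""
    return max(relevance, key=relevance.get)

def span(graph, entities, core):
    """Sum of graph distances from each entity to the Core Entity."""
    return sum(shortest_path_len(graph, e, core) for e in entities if e != core)

# Hypothetical knowledge-graph fragment and per-entity relevance scores.
kg = {
    "coffee": {"beverage", "caffeine"},
    "beverage": {"coffee", "protein"},
    "caffeine": {"coffee"},
    "protein": {"beverage"},
}
rel = {"coffee": 0.9, "caffeine": 0.6, "protein": 0.3}

core = core_entity(rel)           # "coffee" (highest relevance, 0.9)
print(core, span(kg, rel, core))  # caffeine is 1 edge away, protein 2: span = 3
```

A larger span means the concept's entities sit farther apart in the knowledge graph, which is what the quoted passage uses as a proxy for novelty.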

Prosecution Timeline

Sep 28, 2023
Application Filed
Dec 27, 2025
Non-Final Rejection — §101, §102, §103, §112 (current)
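The timeline arithmetic can be checked directly: from the Sep 28, 2023 filing to the Dec 27, 2025 non-final rejection is about 2 years 3 months of pendency already, against this examiner's 4y 4m median time to grant. A quick stdlib check (the 30.44 days-per-month divisor is an average-month approximation):

```python
from datetime import date

filed = date(2023, 9, 28)      # application filed
first_oa = date(2025, 12, 27)  # non-final rejection mailed

elapsed = first_oa - filed
print(elapsed.days)                 # 821 days
print(round(elapsed.days / 30.44))  # about 27 months, i.e. roughly 2y 3m
```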

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12548039
SYSTEM AND METHOD FOR ESTIMATING IN-STORE DEMAND BASED ON ONLINE DEMAND
2y 5m to grant · Granted Feb 10, 2026
Patent 12541751
Robot Fleet Management with Workflow Simulation for Value Chain Networks
2y 5m to grant · Granted Feb 03, 2026
Patent 12450620
METHODS AND APPARATUS TO GENERATE AUDIENCE METRICS USING MATRIX ANALYSIS
2y 5m to grant · Granted Oct 21, 2025
Patent 12380377
Intelligent Guidance System for Queues
2y 5m to grant · Granted Aug 05, 2025
Patent 12380380
INTELLIGENT SCHEDULE MANAGEMENT AND ZONE MONITORING SYSTEM
2y 5m to grant · Granted Aug 05, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
31%
Grant Probability
51%
With Interview (+20.5%)
4y 4m
Median Time to Grant
Low
PTA Risk
Based on 689 resolved cases by this examiner. Grant probability derived from career allow rate.
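The headline projections are internally consistent if the page rounds the career allow rate (211 granted of 689 resolved) and then adds the interview lift in percentage points. That additive combination is an assumption about how the page derives its figures, not something it states:

```python
granted, resolved = 211, 689  # examiner's career data (per this page)
interview_lift = 20.5         # percentage points, page-reported

allow_rate = 100 * granted / resolved
print(round(allow_rate))                   # 31, matching "31% Grant Probability"

# Assumed method: simple additive lift in percentage points.
print(round(allow_rate + interview_lift))  # 51, matching "51% With Interview"
```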
