DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The following is a Final Office Action in response to communications received on 9/19/2025. Claims 1-9 and 11-12 are currently pending and have been examined. Claims 1, 9, 11, and 12 have been amended. Claim 10 has been cancelled.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Step 1: Claims 1-9 are directed to a system, claim 11 to a method, and claim 12 to a computer readable medium. Thus, each independent claim, on its face, is directed to one of the statutory categories of 35 U.S.C. § 101. However, claims 1-9 and 11-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 2A Prong 1: The independent claims (1 and 12, taking claim 1 as representative) recite:
A product design generation support device comprising: a memory configured to store instructions; and at least one processor configured to execute the instructions to:
acquire a design image related to a design of a target product;
estimate line-of-sight information and opinion information of a target person with respect to the design of the target product by inputting the design image and vectorized test result information to an estimation model;
generate, based on the target person, the target product, and the opinion information, a graph;
convert nodes and edges of the graph into embedded vector representations;
wherein the vectorized test result information is based on a central location test, and wherein the estimation model is a model obtained by performing machine learning on the embedded vector representations to learn relations of product information and attribute information with line-of-sight information and opinion information, the product information is information for learning including a design image of a product, the attribute information is information for learning of the person, the line-of-sight information is information for learning of the person with respect to the design of the product, the opinion information is information for learning of the person with respect to the design of the product; and
output a visualization on a display device, the visualization including the design image of the target product, the estimated line-of-sight information, the estimated opinion information, and a pop-up display of the estimated reason appearing when a user designates a specific portion of the opinion information with a pointer and relating to the opinion information corresponding to the designated specific portion.
Claim 11 recites: A product design generation support method performed by a computer, the product design generation support method comprising:
acquiring a design image related to a design of a target product;
estimating line-of-sight information and opinion information of a target person with respect to the design of the target product by inputting the design image and vectorized test result information to an estimation model;
generating, based on the target person, the target product, and the opinion information, a graph;
vectorizing, by a graph artificial intelligence, the graph; and
outputting the line-of-sight information and the opinion information,
wherein the estimation model is a model obtained by performing machine learning on the embedded vector representations to learn relations of product information and attribute information with line-of-sight information and opinion information, the product information is information for learning including a design image of a product, the attribute information is information for learning of the person, the line-of-sight information is information for learning of the person with respect to the design of the product, the opinion information is information for learning of the person with respect to the design of the product;
and outputting a visualization on a display device, the visualization including the design image of the target product, the estimated line-of-sight information, the estimated opinion information, and a pop-up display of the estimated reason appearing when a user designates a specific portion of the opinion information with a pointer and relating to the opinion information corresponding to the designated specific portion.
The limitations above, except for the emphasized limitations, under their broadest reasonable interpretations, recite certain methods of organizing human activity for managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) as well as commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations). The claimed invention recites steps for estimating the line of sight of a person and the opinion of the person for a given design of a product using an estimation model. Under their broadest reasonable interpretation, these steps specifically fall under advertising, marketing, and sales activities. The Examiner notes that although the claim limitations are summarized, the analysis regarding subject matter eligibility considers the entirety of the claim and all of the claim elements individually, as a whole, and in ordered combination.
The limitations above, under their broadest reasonable interpretations, also recite a mental process, as the limitations could be performed through observation and judgment. The claimed invention recites steps for estimating the line of sight of a person and the opinion of the person for a given design of a product using an estimation model, which could be performed by a person observing another person, recording the interactions and responses to the design of the product, and performing an estimation determination by hand. The determination of the vectorized test result, generation of a graph, and conversion of nodes and edges of the graph into embedded vector representations could also be calculated with pen and paper. The Examiner notes that although the claim limitations are summarized, the analysis regarding subject matter eligibility considers the entirety of the claim and all of the claim elements individually, as a whole, and in ordered combination.
Prong 2: This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of:
A product design generation support device comprising: a memory configured to store instructions; and at least one processor configured to execute the instructions to (claim 1):
A product design generation support method performed by a computer, the product design generation support method comprising (claim 11):
A non-transitory computer readable medium storing a computer program that causes a computer to execute (claim 12):
by machine learning
on a display device
and a pop-up display of the estimated reason appearing when a user designates a specific portion of the opinion information with a pointer
by a graph artificial intelligence (claim 11)
The additional elements emphasized above are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of processing data) such that they amount to no more than mere instructions to apply the exception using a generic computer component. These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application. See MPEP 2106.05(f).
Examiner notes that the recitation of “by machine learning” and “by a graph artificial intelligence” provides nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f) and the July 2024 Subject Matter Eligibility Examples and corresponding analysis. MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. The fact that the model is machine learned merely serves to generally apply the abstract idea without placing any limits on how the machine-learned model functions. Rather, these limitations only recite the outcome of outputting a visualization including the design image of the target product, the estimated line-of-sight information, the estimated opinion information, and a pop-up display of the estimated reason appearing when a user designates a specific portion of the opinion information with a pointer and relating to the opinion information corresponding to the designated specific portion, and do not include any details about how the “estimation” is accomplished.
Accordingly, these additional elements when considered individually or as a whole do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The independent claims are directed to an abstract idea.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed with respect to Step 2A Prong two, the additional elements in the claims amount to no more than mere instructions to apply the judicial exception using a generic computer component.
Dependent claims 2-9 when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. §101 because the additional recited limitations fail to establish that the claims are not directed to the same abstract idea of Independent Claims 1, 11 and 12 without significantly more.
Claim 2 recites wherein the at least one processor is further configured to estimate a reason for the estimation of the opinion information based on the line-of-sight information, and output the reason for the estimation. The claim merely further limits that abstract idea and recites no further additional elements. Therefore the limitation does not recite significantly more to integrate the judicial exception into a practical application.
Claim 3 recites wherein the opinion information includes an opinion of the target person on whether to purchase the target product. The claim merely further limits that abstract idea and recites no further additional elements. Therefore the limitation does not recite significantly more to integrate the judicial exception into a practical application.
Claim 4 recites wherein the opinion information includes a text expressing an opinion of the target person with respect to the design of the target product. The claim merely further limits that abstract idea and recites no further additional elements. Therefore the limitation does not recite significantly more to integrate the judicial exception into a practical application.
Claim 5 recites wherein the line-of-sight information includes at least one of a viewing time for which the target person looks at the target product, the number of viewing times that the target person looks at the target product, and a viewing rate that is a ratio of the time for which the target person looks at the target product or the number of times that the target person looks at the target product to a time for which the target person looks at a plurality of products including the target product or the number of times that the target person looks at the plurality of products including the target product. The claim merely further limits that abstract idea and recites no further additional elements. Therefore the limitation does not recite significantly more to integrate the judicial exception into a practical application.
Claim 6 recites wherein the at least one processor is further configured to acquire target product attribute information for the target product and target person attribute information for the target person, estimate the opinion information with respect to the design of the target product using the design image, the target person attribute information, the target product attribute information, and the estimation model, and output the opinion information. The claim merely further limits that abstract idea and recites no further additional elements. Therefore the limitation does not recite significantly more to integrate the judicial exception into a practical application.
Claim 7 recites wherein the target product is a product displayed on a shelf, and the target product attribute information includes a position at which the target product is displayed on the shelf. The claim merely further limits that abstract idea and recites no further additional elements. Therefore the limitation does not recite significantly more to integrate the judicial exception into a practical application.
Claim 8 recites wherein the target person attribute information is an attribute of the target person having a specific attribute, and the at least one processor is configured to estimate the opinion information and the line-of-sight information of the target person having the specific attribute with respect to the design of the target product. The claim merely further limits that abstract idea and recites no further additional elements. Therefore the limitation does not recite significantly more to integrate the judicial exception into a practical application.
Claim 9 recites wherein the at least one processor is further configured to acquire a plurality of design images in which different designs are shown, estimate the opinion information and the line-of-sight information with respect to the respective designs shown in the plurality of design images, and output, as information to assist in decision making for determining the design of a product, ranks of the designs based on at least one of the opinion information and the line-of-sight information. The claim merely further limits that abstract idea and recites no further additional elements. Therefore the limitation does not recite significantly more to integrate the judicial exception into a practical application.
For these reasons the claims are rejected under 35 USC 101.
Subject Matter Free of Prior Art
Claims 1, 11 and 12 are determined to have overcome the prior art rejection and are free of the prior art; however, the claims remain rejected under 35 USC 101, as set forth above.
The independent claims as amended are found to overcome the prior art rejection for the reasons set forth below.
Claims 1 and 12 now recite the additional claimed features of:
converting nodes and edges of the graph into embedded vector representations;
wherein the vectorized test result information is based on a central location test, and
wherein the estimation model is a model obtained by performing machine learning on the embedded vector representations to learn relations of product information and attribute information with line-of-sight information and opinion information, the product information is information for learning including a design image of a product, the attribute information is information for learning of the person, the line-of-sight information is information for learning of the person with respect to the design of the product, the opinion information is information for learning of the person with respect to the design of the product;
and outputting a visualization on a display device, the visualization including the design image of the target product, the estimated line-of-sight information, the estimated opinion information, and a pop-up display of the estimated reason appearing when a user designates a specific portion of the opinion information with a pointer and relating to the opinion information corresponding to the designated specific portion.
Claim 11 recites the additional claimed features of:
wherein the estimation model is a model obtained by performing machine learning on the embedded vector representations to learn relations of product information and attribute information with line-of-sight information and opinion information, the product information is information for learning including a design image of a product, the attribute information is information for learning of the person, the line-of-sight information is information for learning of the person with respect to the design of the product, the opinion information is information for learning of the person with respect to the design of the product;
and outputting a visualization on a display device, the visualization including the design image of the target product, the estimated line-of-sight information, the estimated opinion information, and a pop-up display of the estimated reason appearing when a user designates a specific portion of the opinion information with a pointer and relating to the opinion information corresponding to the designated specific portion.
The closest prior art was found to be as follows:
Previously cited Moon additionally teaches an emotion sensitive feature vector (element 962) used to estimate user’s responses to items they are interacting with at a store.
Previously cited Edwards additionally teaches “[0054] Further, the virtual reality simulations used for market research may be customized to conduct a broad variety of research related to consumer shopping and purchasing decisions. For example, the effects of different store clientele demographics may be researched, such as how the distribution of products between "high" and "low" end brands effects whether higher-income individuals are more (or less) likely to shop in stores where the predominant cross section of goods are purchased by lower-income individuals. [0052] For example, "hot spots" of visual attention may be recolored or have a colored cloud (optionally having a degree of opacity such as from 10% to 90% transparent) superimposed over the hot spots, with the extent of gazing (e.g., cumulative time spent gazing at the region) being represented in terms of the size, color, color intensity or opacity of the cloud, to provide a readily recognized graphical means of presenting eye gaze data on the display and [0035] The eye-tracking information may be provided to presentation program 110. In turn, the smart objects being viewed may record information reflecting that the participant viewed a given object. By recording this information over time, the path of the participant's gaze may be captured and played back to a researcher, product manufacturer/purchaser, retailer or other relevant party. Such visualizations may be used to identify "hot spots" within the virtual shopping environment, i.e., elements of the environment that attracted the attention of one or more participants, as reflected by the monitored eye movements of such participants. Additionally, other physical responses of the participant may be recorded to evaluate an overall emotional reaction a given participant has to elements of the virtual shopping environment. For example, to determine whether someone is offended by provocative literature presented at a checkout stand. 
In one embodiment, in addition to an eye-tracking system, the appropriate monitoring devices 112 could monitor a participant's respiration, blood pressure, pulse, galvanic skin response, body motion, muscle tension, etc. For example, the instrumented device 106 could include a sensor configured to monitor the heart rate of a participant. In such a case, once the simulation was completed, the heart rate and eye-tracking data could be correlated with one another.”
However, the previously cited combination of references does not teach the additional features of the amended claims including:
Claims 1 and 12 now recite the additional claimed features of:
converting nodes and edges of the graph into embedded vector representations;
wherein the vectorized test result information is based on a central location test, and
wherein the estimation model is a model obtained by performing machine learning on the embedded vector representations to learn relations of product information and attribute information with line-of-sight information and opinion information, the product information is information for learning including a design image of a product, the attribute information is information for learning of the person, the line-of-sight information is information for learning of the person with respect to the design of the product, the opinion information is information for learning of the person with respect to the design of the product;
and a pop-up display of the estimated reason appearing when a user designates a specific portion of the opinion information with a pointer and relating to the opinion information corresponding to the designated specific portion.
Claim 11 recites the additional claimed features of:
wherein the estimation model is a model obtained by performing machine learning on the embedded vector representations to learn relations of product information and attribute information with line-of-sight information and opinion information, the product information is information for learning including a design image of a product, the attribute information is information for learning of the person, the line-of-sight information is information for learning of the person with respect to the design of the product, the opinion information is information for learning of the person with respect to the design of the product;
vectorize, by a graph artificial intelligence, the graph
and a pop-up display of the estimated reason appearing when a user designates a specific portion of the opinion information with a pointer and relating to the opinion information corresponding to the designated specific portion.
The closest prior art for these remaining features is as follows:
Brooks US 2019/0370874 discloses using partiality vectors to determine how a user views an item ([0068] The control circuit can then generate an output comprising identification of the particular product by evaluating the partiality vectors and the vectorized characterizations against the set of rules. [0069] The aforementioned set of rules can include, for example, comparing at least some of the partiality vectors for the particular person to each of the vectorized characterizations for each of the candidate products using vector dot product calculations.). One approach taken for collecting a person’s partiality is that of an online survey or questionnaire (see [0087]). While the reference employs a vector-based approach for characterizing partialities of users to products, it does not disclose the claimed invention; in particular, the data collection is not from a central location testing process.
“Central Location Testing” discloses the defined parameters of central location testing for quantitative research purposes; however, it does not disclose the claimed invention. In particular, the reference does not disclose the vectorization process, nor machine learning applied to embeddings representing the product information, estimated line-of-sight information, and opinion information.
Yap US 20190362220 discloses a collaborative filtering process that uses an “[0035] An attribute embedding lookup operation (e.g., the attribute embedding lookup 210 of FIG. 2) retrieves the vector for each attribute, resulting in a k×M user matrix (E.sub.u), and a k×N item matrix (E.sub.i). M represents the number of attributes a particular user has, and N represents the number of attributes an item has. An attention mechanism (e.g., the attention layer 212 of FIG. 2) constructs a user vector representation (z.sub.u), and an item vector representation (z.sub.i).” However, the reference does not disclose the invention as claimed and in particular, wherein the estimation model is a model obtained by performing machine learning on the embedded vector representations to learn relations of product information and attribute information with line-of-sight information and opinion information, the product information is information for learning including a design image of a product, the attribute information is information for learning of the person, the line-of-sight information is information for learning of the person with respect to the design of the product, the opinion information is information for learning of the person with respect to the design of the product;
Weinberg US 20220197248 discloses collaboration for co-creation of a product between users and manufacturers (see [0028]). A model receives feedback that can be implemented into the design, and the co-creation process can use machine learning, user embeddings, and item embeddings to optimize the design. However, the reference does not disclose the invention as claimed. In particular, it does not disclose converting nodes and edges of the graph into embedded vector representations; wherein the vectorized test result information is based on a central location test, and wherein the estimation model is a model obtained by performing machine learning on the embedded vector representations to learn relations of product information and attribute information with line-of-sight information and opinion information, the product information is information for learning including a design image of a product, the attribute information is information for learning of the person, the line-of-sight information is information for learning of the person with respect to the design of the product, the opinion information is information for learning of the person with respect to the design of the product;
Burtenshaw US 10001898 discloses (see Col. 17, lines 25-30) that when the functionality of summary data visualization is selected by the user 100, the user 100 may be presented by the system 400 with a cue such as a cursor change, pop-up message, pop-up icon, information retrieval icon, data visualization highlighting, a sound, and/or any other GUI and/or text element that indicates the availability and/or display of the related information 450. The user 100 may then be able to confirm that he or she desires to retrieve and/or display the information.
It was found that no reference, alone or in combination, anticipates, reasonably teaches, or renders obvious the below-noted features of Applicant’s invention. The features of claims 1, 11 and 12 in combination that overcome the prior art are:
Claims 1 and 12 now recite the additional claimed features of:
converting nodes and edges of the graph into embedded vector representations;
wherein the vectorized test result information is based on a central location test, and
wherein the estimation model is a model obtained by performing machine learning on the embedded vector representations to learn relations of product information and attribute information with line-of-sight information and opinion information, the product information is information for learning including a design image of a product, the attribute information is information for learning of the person, the line-of-sight information is information for learning of the person with respect to the design of the product, the opinion information is information for learning of the person with respect to the design of the product;
and a pop-up display of the estimated reason appearing when a user designates a specific portion of the opinion information with a pointer and relating to the opinion information corresponding to the designated specific portion.
Claim 11 recites the additional claimed features of:
wherein the estimation model is a model obtained by performing machine learning on the embedded vector representations to learn relations of product information and attribute information with line-of-sight information and opinion information, the product information is information for learning including a design image of a product, the attribute information is information for learning of the person, the line-of-sight information is information for learning of the person with respect to the design of the product, the opinion information is information for learning of the person with respect to the design of the product;
vectorize, by a graph artificial intelligence, the graph
and a pop-up display of the estimated reason appearing when a user designates a specific portion of the opinion information with a pointer and relating to the opinion information corresponding to the designated specific portion.
Therefore, none of the cited references disclose or render obvious each and every feature of the claimed invention, and the claimed invention is determined to be free of the prior art. Although the concepts of the claimed features could individually be taught, any combination of references would teach the claimed limitations only through a piecemeal analysis, since the references would only be combined and deemed obvious based on knowledge gleaned from the applicant's disclosure. Such a reconstruction is improper (i.e., hindsight reasoning). See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971). The examiner emphasizes that it is the interrelationship of the limitations that renders these claims free of the prior art/additional art.
Therefore, it is hereby asserted by the Examiner that, in light of the above, the claims are free of prior art, as the references do not anticipate the claims, and no further modification of the references would render them obvious to a person of ordinary skill in the art.
Response to Arguments
Applicant’s arguments, filed 9/19/2025 with respect to the prior art rejections have been fully considered and are persuasive. The rejection under 35 USC 103 has been withdrawn for the reasons set forth above. The amendment was not found to be taught in the previously cited prior art nor found in the updated search.
Applicant's arguments filed 9/19/2025 have been fully considered but they are not persuasive with respect to the rejection under 35 USC 101.
With respect to the remarks directed to the “alleged judicial exception is integrated into a practical application of an improved machine learning system offering more effective and reliable image analysis”, the examiner asserts that the alleged improvement lies in the abstract idea and not in the machine learning itself. The data itself might be more accurate through the use of the recited machine learning techniques; however, the machine learning techniques themselves are not improved. The machine learning/AI is disclosed and claimed at a high level of generality, using generic machine learning/AI. One example from the specification is the use of “deep learning” (see [0033] and [0049]). The claimed invention merely applies machine learning to provide a more reliable data set that is output on a display. The benefit could be improved designs; however, this improvement lies in the abstract idea alone.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTORIA E. FRUNZI whose telephone number is (571)270-1031. The examiner can normally be reached Monday- Friday 7-4 (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein can be reached at (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
VICTORIA E. FRUNZI
Primary Examiner
Art Unit 3689
/VICTORIA E. FRUNZI/Primary Examiner, Art Unit 3689 11/4/2025