DETAILED ACTION
This action is in response to the filing of October 24, 2024. Claim 1 is cancelled and new claims 2-21 are added. Claims 2-21 have been examined in this application. The Information Disclosure Statement (IDS) filed on October 24, 2024 has been acknowledged.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 2-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
Step 1: Claims 2-14 are drawn to a method (i.e., a process), and claims 15-21 are drawn to a system (i.e., a machine). (Step 1: YES).
Step 2A - Prong One: In prong one of step 2A, the claim(s) is/are analyzed to evaluate whether it/they recite(s) a judicial exception.
Claim 2: A method comprising:
causing to be captured, using a computing device, visual content data depicting a subject;
receiving, from the computing device, a selection of a first product;
generating, based at least in part on the visual content data and the selection, a first interactive visualization comprising modified content depicting the subject at which a representation of the first product is overlaid;
transmitting, to the computing device, the first interactive visualization for display at an interface of the computing device;
receiving, from the computing device, an interaction with the first interactive visualization at the interface of the computing device that changes a product attribute of the first product;
receiving, from the computing device, bio-feedback data associated with the changed product attribute, wherein the bio-feedback data is captured via at least one sensor communicatively coupled to the computing device;
based at least in part on the bio-feedback data, determining one or more preferences toward the first product and the changed product attribute;
identifying a second product based on the one or more preferences towards the first product and the changed product attribute;
generating, based on the first interactive visualization and the second product, a second interactive visualization, wherein the second interactive visualization comprises modified content depicting the subject at which a representation of the second product is overlaid;
and transmitting, to the computing device for display at the interface, the second interactive visualization, wherein the representation of the second product is displayed at least partially over the subject.
Claim 9: A method comprising: capturing, at a computing device, visual content data depicting a subject;
detecting, at the computing device, a selection of a first product;
generating, for display at an interface of the computing device and based at least in part on the visual content data and the selection, a first interactive visualization comprising modified content depicting the subject at which a representation of the first product is overlaid;
detecting, at the computing device, an interaction with the first interactive visualization at the interface that changes a product attribute of the first product;
capturing, at the computing device, bio-feedback data associated with the changed product attribute, wherein the bio-feedback data is captured via at least one sensor communicatively coupled to the computing device;
based at least in part on the bio-feedback data, determining, at the computing device, one or more preferences toward the first product and the changed product attribute;
identifying, at the computing device, a second product based on the one or more preferences towards the first product and the changed product attribute;
generating, for display at the interface, a second interactive visualization, comprising modified content depicting the subject at which a representation of the second product is overlaid, wherein the representation of the second product is displayed at least partially over the subject.
(Examiner notes: The underlined claim terms above are interpreted as additional elements beyond the abstract idea and are further analyzed under Step 2A - Prong Two)
Under their broadest reasonable interpretation, the claims are directed to an abstract idea of collecting information about a user, analyzing that information to determine preferences, and using the results of the analysis to present recommendations, which is a form of mental process and commercial interaction implemented on a generic computing device. In particular, the claims recite capturing visual content depicting a subject, receiving a selection of a product, detecting interactions that modify a product attribute, receiving bio-feedback data indicative of user reaction, determining user preferences based on that bio-feedback, and identifying and presenting a second product based on the determined preferences. These steps amount to observing user behavior, evaluating the user’s response to product features, and recommending another product accordingly, activities that can be performed mentally or with pen and paper, or that reflect fundamental economic principles or practices (including hedging, insurance, mitigating risk), i.e., tailoring recommendations based on customer reactions as recited in independent claims 2 and 9, and therefore fall under “Certain Methods of Organizing Human Activity”. Applicant’s specification describes the claimed invention as follows: “FIG. 2 depicts an illustrative embodiment illustrating aspects of providing a product recommendation based on bio-feedback captured from subject interactions with the simulated visualization described in FIG. 1, according to some embodiments described herein. Diagram 200 shows an example screen of user equipment 114, which illustrates a product recommendation 121 of another lipstick product “Vincent Logo lip stain coral” 122 which has a similar color “coral” with the product “Revlon coral” 113 that has been virtually tried on” and “control circuitry transmits a query based on the product feature (such as the lip color “coral” in FIG. 1) to the product databases (e.g., 219 in FIG. 3), and receives a product recommendation having the same product feature” (see at least [0026 and 0044] of instant specification). The steps, under their broadest reasonable interpretation, are thus directed to an abstract idea of observing customer reactions and tailoring product recommendations, implemented on generic computer infrastructure, which is an instance of certain methods of organizing human activity. The Examiner notes that although the claim limitations are summarized, the analysis regarding subject matter eligibility considers the entirety of the claim and all of the claim elements individually, as a whole, and in ordered combination.
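For orientation only, the query-by-feature recommendation described in the quoted specification passage can be summarized by the following minimal Python sketch. The product entries mirror the quoted example; the database structure, field names, and function names are hypothetical and are not taken from the disclosure.

# Illustrative sketch only; structure and names are hypothetical, not from the specification.
PRODUCT_DATABASE = [
    {"name": "Revlon coral", "type": "lipstick", "color": "coral"},
    {"name": "Vincent Logo lip stain coral", "type": "lip stain", "color": "coral"},
    {"name": "Matte rose lipstick", "type": "lipstick", "color": "rose"},
]

def recommend_by_feature(tried_on, feature="color"):
    """Return products sharing the tried-on product's value for the given feature."""
    value = tried_on[feature]
    return [p for p in PRODUCT_DATABASE
            if p[feature] == value and p["name"] != tried_on["name"]]

if __name__ == "__main__":
    tried_on = {"name": "Revlon coral", "type": "lipstick", "color": "coral"}
    print(recommend_by_feature(tried_on))  # returns the "coral" lip stain entry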
The dependent claims 3-8, 10-14, and 16-21 likewise recite an abstract idea, including various forms of user behavior or bio-feedback leveraged to deduce preferences, such as movement patterns, facial expressions, lip movements, eye movements, or gaze direction, and the linking of these behaviors to specific sections of a displayed product or topic. The claims focus on the idea of observing human behavior to gauge user interest or preferences, i.e., a mental process that a human can perform by watching where a user looks, how the user moves, or how the user reacts to a product feature. Dependent claims 10-14 further elaborate on the abstract idea by reciting associations between detected user behavior and products or product attributes, and by determining preferences based on those associations. These claims are directed to the abstract concept of linking observed user actions to items of interest and drawing conclusions about preferences, which is a form of data analysis and organization of information. Such activities reflect fundamental mental processes, such as remembering what a person looked at or interacted with and concluding that the person prefers that item. Dependent claims 16-21 further narrow the abstract idea by specifying that the visual content data comprises image data and that the modified content depicts changes to particular areas of the subject, such as facial areas, based on selected products. These claims are directed to the abstract concept of presenting customized visual representations to reflect user-selected or recommended products, which constitutes the display of information tailored to user preferences. As such, the claims are directed to an abstract idea involving organizing human activity related to commerce and product recommendation, which falls within a judicial exception under 35 U.S.C. § 101.
Independent claim 15 recites nearly identical steps (and therefore also recites limitations that fall within this subject matter grouping of abstract ideas), and claim 15 is therefore determined to recite an abstract idea under the same analysis.
As such, the Examiner concludes that claims 2, 9, and 15 recite an abstract idea (Step 2A – Prong One: YES).
Step 2A - Prong Two: In prong two of step 2A, an evaluation is made whether a claim recites any additional element, or combination of additional elements, that integrates the exception into a practical application of that exception. An “additional element” is an element that is recited in the claim in addition to (beyond) the judicial exception (i.e., an element/limitation that sets forth an abstract idea is not an additional element). The phrase “integration into a practical application” is defined as requiring an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception.
The requirement to execute the claimed steps/functions using a computing device, sensor, interface, transceiver and control circuitry, interactive visualization, etc. (claims 2, 9, and 15) is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer.
Similarly, the limitations of using a computing device, sensor, interface, transceiver and control circuitry, interactive visualization, etc. (claims 2, 9, and 15, and dependent claims 3-8, 10-14, and 16-21) are recited at a high level of generality and amount to no more than mere instructions to apply the exception using generic computer components. These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(f)).
Further, the additional limitations beyond the abstract idea identified above serve merely to generally link the use of the judicial exception to a particular technological environment or field of use. Specifically, they serve to limit the application of the abstract idea to computerized environments (e.g., the receive, capture, detect, generate, transmit, identify, etc. steps performed by a computing device, sensor, interface, transceiver and control circuitry, interactive visualization, etc.). This reasoning was demonstrated in Intellectual Ventures I LLC v. Capital One Bank (Fed. Cir. 2015), where the court determined that “an abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment, such as the Internet [or] a computer.” These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(h)).
The recited additional elements of causing to be captured, using a computing device, visual content data depicting a subject; receiving, from the computing device, a selection of a first product; transmitting, to the computing device, the first interactive visualization for display at an interface of the computing device; receiving, from the computing device, an interaction with the first interactive visualization at the interface of the computing device that changes a product attribute of the first product; receiving, from the computing device, bio-feedback data associated with the changed product attribute, wherein the bio-feedback data is captured via at least one sensor communicatively coupled to the computing device; identifying a second product based on the one or more preferences towards the first product and the changed product attribute; and transmitting, to the computing device for display at the interface, the second interactive visualization, wherein the representation of the second product is displayed at least partially over the subject (independent claims 2, 9, and 15), additionally and/or alternatively simply append insignificant extra-solution activity to the judicial exception (e.g., mere pre-solution activity, such as data gathering, in conjunction with an abstract idea). These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (see MPEP 2106.05(g)).
Dependent claims 3-8, 10-14, and 16-21 fail to include any additional elements. In other words, each of the limitations/elements recited in the respective dependent claims is further part of the abstract idea as identified by the Examiner for each respective dependent claim (i.e., they are part of the abstract idea recited in each respective claim).
The Examiner has therefore determined that the additional elements, or combination of additional elements, do not integrate the abstract idea into a practical application. Accordingly, the claims are directed to an abstract idea (Step 2A – Prong Two: NO).
Step 2B: In step 2B, the claims are analyzed to determine whether any additional element, or combination of additional elements, is/are sufficient to ensure that the claims amount to significantly more than the judicial exception. This analysis is also termed a search for an "inventive concept." An "inventive concept" is furnished by an element or combination of elements that is recited in the claim in addition to (beyond) the judicial exception, and is sufficient to ensure that the claim as a whole amounts to significantly more than the judicial exception itself. Alice Corp., 134 S. Ct. at 2355, 110 USPQ2d at 1981 (citing Mayo, 566 U.S. at 72-73, 101 USPQ2d at 1966).
As discussed above in “Step 2A – Prong Two”, the identified additional elements in independent claims 2, 9, and 15, and dependent claims 3-8, 10-14, and 16-21, are equivalent to adding the words “apply it” on a generic computer, and/or generally link the use of the judicial exception to a particular technological environment or field of use. Therefore, the claims as a whole do not amount to significantly more than the judicial exception itself.
The recited additional elements of causing to be captured, using a computing device, visual content data depicting a subject; receiving, from the computing device, a selection of a first product; transmitting, to the computing device, the first interactive visualization for display at an interface of the computing device; receiving, from the computing device, an interaction with the first interactive visualization at the interface of the computing device that changes a product attribute of the first product; receiving, from the computing device, bio-feedback data associated with the changed product attribute, wherein the bio-feedback data is captured via at least one sensor communicatively coupled to the computing device; identifying a second product based on the one or more preferences towards the first product and the changed product attribute; and transmitting, to the computing device for display at the interface, the second interactive visualization, wherein the representation of the second product is displayed at least partially over the subject (independent claims 2, 9, and 15), additionally and/or alternatively simply append insignificant extra-solution activity to the judicial exception (e.g., mere pre-solution activity, such as data gathering, in conjunction with an abstract idea). That is, the claims recite steps such as capturing visual content data, receiving a selection, identifying a product, and transmitting interactive visualizations for display. These limitations constitute insignificant extra-solution activity that merely collects inputs and outputs results associated with the alleged abstract idea of determining user preferences and recommending products. Such activity is similar to receiving or transmitting data over a network, e.g., using the Internet to gather data (Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information)); storing and retrieving information in memory (Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93); presenting offers to potential customers and gathering statistics generated based on the testing about how potential customers responded to the offers, where the statistics are then used to calculate an optimized price (OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93); and determining an estimated outcome and setting a price (OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93), each of which is a well-understood, routine, and conventional function when it is claimed in a merely generic manner, as it is here (see MPEP 2106.05(d)(II)).
This conclusion is based on a factual determination. Applicant’s own disclosure at paragraphs [0007]-[0008] acknowledges that “The recommendation engine captures an image or video of a subject’s movement (including facial movement) and generates a movement pattern or facial expression pattern from the captured image or video content. The recommendation engine then uses the pattern to identify the movement or facial expression, and then identifies an emotion associated with the identified movement … Based on the particular feature and the identified emotion, the recommendation engine recommends a product having the same particular product feature if the bio-feedback shows positive emotion” (i.e., the conventional nature of receiving and transmitting data/messages over a network). These additional elements therefore do not ensure that the claim amounts to significantly more than the abstract idea.
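For illustration only, the flow acknowledged in the quoted disclosure (movement pattern, emotion identification, and same-feature recommendation on positive bio-feedback) can be summarized by the following minimal Python sketch. The function names and the pattern-to-emotion mapping are hypothetical and are not taken from the disclosure.

# Illustrative sketch only; names and the emotion mapping are hypothetical.
POSITIVE_PATTERNS = {"smile", "nod"}  # placeholder movement/facial-expression patterns

def identify_emotion(movement_pattern):
    """Map a detected movement or facial-expression pattern to an emotion label."""
    return "positive" if movement_pattern in POSITIVE_PATTERNS else "negative"

def recommend_on_positive_feedback(tried_product, movement_pattern, catalog, feature="color"):
    """Recommend another product sharing the tried product's feature when bio-feedback is positive."""
    if identify_emotion(movement_pattern) != "positive":
        return None
    for candidate in catalog:
        if candidate[feature] == tried_product[feature] and candidate["name"] != tried_product["name"]:
            return candidate
    return None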
Viewing the additional limitations in combination also shows that they fail to ensure the claims amount to significantly more than the abstract idea. When considered as an ordered combination, the additional components of the claims add nothing that is not already present when considered separately, and thus simply append to the abstract idea words equivalent to “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer, and/or append to the abstract idea insignificant extra-solution activity associated with the implementation of the judicial exception (e.g., mere data gathering, post-solution activity), and/or simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception.
The dependent claims 3-8, 10-14, and 16-21 fail to include any additional elements. In other words, each of the limitations/elements recited in the respective dependent claims is further part of the abstract idea as identified by the Examiner for each respective dependent claim (i.e., they are part of the abstract idea recited in each respective claim).
Claims 3-8 merely specify particular types of bio-feedback or user behavior, such as movement patterns, facial expressions, lip movements, or gaze direction, used to infer user preferences. Limiting the abstract idea to specific forms of human behavior or physiological response does not impose a meaningful limit on the abstract idea, as these forms of bio-feedback merely represent additional data inputs to the same preference-analysis process. The dependent claims do not recite any improvement to sensor technology, image processing, or bio-feedback analysis techniques. Instead, they continue to rely on generic computing devices and sensors performing their ordinary functions of detecting movement, facial expressions, or gaze.

Claims 10-14 further elaborate on the abstract idea by reciting associations between user behavior and products or product attributes, and by determining preferences based on such associations. The recited associations and preference determinations are performed using generic computer components executing conventional data-processing functions. The claims do not recite any particular data structure, algorithmic improvement, or technical mechanism that enhances the operation of the computing device itself. Instead, they merely automate a process of tracking user interest and drawing conclusions about preferences.

Claims 16-21 recite that the visual content data comprises image data and that the modified content depicts changes to particular areas of the subject, such as facial areas, based on selected or recommended products. These claims merely specify how the results of the abstract idea are presented or visualized to the user. The claims do not recite any improvement to image rendering, computer graphics, or display technology. Instead, they use conventional visualization techniques to present information to the user.

Collectively, these dependent claims constitute well-understood, routine, and conventional activities performed by generic computer components, recited at a high level of generality, and therefore do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
The Examiner has therefore determined that no additional element, or combination of additional elements, is sufficient to ensure that the claims amount to significantly more than the abstract idea identified above (Step 2B: NO).
Therefore, claims 2-21 are not eligible subject matter under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art.
Ascertaining the differences between the prior art and the claims at issue.
Resolving the level of ordinary skill in the pertinent art.
Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2-21 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. 20160275600 (“Adeyoola”) in view of U.S. Pub. 20140365272 (“Hurewitz”), and further in view of U.S. Pub. 20190043493 (“Mohajer”).
As per claims 2 and 15, Adeyoola discloses, causing to be captured, using a computing device, visual content data depicting a subject (Examiner interprets that acquiring user image data (e.g., a full-length photograph of the user) via a computing device (e.g., mobile phone) for further processing into a virtual body model, which constitutes “causing to be captured … visual content data depicting a subject.”) (“The method may be one in which a user takes, or has taken for them, a single full length photograph of themselves which is then processed by a computer system that presents a virtual body model based on that photograph, together with markers whose position the user can adjust, the markers corresponding to some or all of the following: top of the head, bottom of heels, crotch height, width of waist, width of hips, width of chest. The method may be one where the user enters height, weight and, optionally, bra size. The method may be one comprising a computer system which then generates an accurate 3D virtual body model and displays that 3D virtual body model on screen. The method may be one where a computer system is a back-end server.”) (0377-0378);
receiving, from the computing device, a selection of a first product (Examiner interprets user selection of a garment/product from an on-screen library, which corresponds to receiving a selection of a first product from the computing device) (“user selecting a garment from an on-screen library of virtual garments; (b) a processing system automatically generating an image of the garment combined onto the virtual body model, the garment being sized automatically to be a correct fit; (c) the processing system generating data defining how a physical version of that garment would be sized to provide that correct fit; (d) the system providing that data to a garment manufacturer to enable the manufacturer to make a garment that fits the user”) (0376-0378);
generating, based at least in part on the visual content data and the selection, a first interactive visualization comprising modified content depicting the subject at which a representation of the first product is overlaid (Examiner interprets automatically generating and displaying an image of a selected garment combined onto the user’s virtual body model. The Examiner interprets “combined onto” and “displayed” as modified content depicting the subject with an overlaid representation of the product, i.e., the claimed first interactive visualization) (“user selecting a garment from an on-screen library of virtual garments; (b) a processing system automatically generating an image of the garment combined onto the virtual body model, the garment being sized automatically to be a correct fit; (c) the processing system generating data defining how a physical version of that garment would be sized to provide that correct fit; (d) the system providing that data to a garment manufacturer to enable the manufacturer to make a garment that fits the user” and “virtual body model includes an image of a user's face, obtained from a digital photograph provided by the user. The method may be one in which the lighting conditions include one or more of: main direction of the lighting; colour balance of the lighting; colour temperature of the lighting; diffusiveness of the lighting. The method may be one in which the type of garment determines simulated weather conditions, such as rain, sunshine, snow, which are then applied to the image of the garment when combined with the virtual body model. The method may be one in which a user can manually select images from the database by operating a control that mimics the effect of changing the lighting conditions in which a garment was photographed. The method may be one in which the image processing system can automatically detect parameters of the lighting conditions applying to the digital face photograph supplied by the user can select matching garment images from the database”) (0376-0378 and 0263-0266, 0955);
transmitting, to the computing device, the first interactive visualization for display at an interface of the computing device (Examiner interprets that the body model and combined garment visualization are generated by a computer system (including a back-end server) and displayed to the user on a screen/device interface i.e. server-to-device delivery and on-screen presentation as “transmitting … for display at an interface of the computing device.”) (“method of generating photo-realistic images of a garment combined onto a virtual body model, in which (a) a user locates a garment on a website; (b) a computer implemented system analyses the image of that garment from the website and then searches and identifies that garment in a database of previously analysed garments and then combines one or more virtual images of the garment from its database onto a virtual body model of the user and then displays to the user that combined garment and virtual body model … virtual fitting room is possible to view and interact with from different types of platforms, where the user will get the virtual fitting room experience from any device used. Using a multi channel approach aids the different core features of the different devices and also the different types of features that you would like to use when you are using a mobile phone for instance”) (0955-0958, also refer to 0376-0378 and 0262-0266);
receiving, from the computing device, an interaction with the first interactive visualization at the interface of the computing device that changes a product attribute of the first product (Examiner interprets user interaction with the virtual fitting room interface to browse, select, and replace garments (and outfits) displayed on the body model. The Examiner interprets such interaction as changing a product attribute and/or product configuration presented in the visualization (e.g., garment selection/variant displayed), thereby satisfying the limitation of an interaction that changes a product attribute) (“user can select what type of garment they would like to flick through to try out on the body model and also select one of the garments to stay on the body model. The user can also select to flick through whole outfits of garments to be tried on to the body model … browsing tool allows the user to flick vertically to change the set of garments to flick through. It can be understood as several different horizontal rows of garments that the user can alter between … user can select to continue to flick through alternative garments to replace the selected garment or the user can select to flick through alternative garments to be worn together with the selected garment …”) (0394-0396);
identifying a second product based on the one or more preferences towards the first product and the changed product attribute (Examiner interprets providing recommendations via the virtual fitting room to users based on user-related factors (e.g., body model/body type fit, retailer/business rules). And these recommendation features as identifying a second product for presentation to the user) (“Any recommendation from a retailer may be influenced by the business goals with wanting to ship high margin garments or garments that they have a big stock of … Recommendations can be given in relation to the natural fit, which is the fit of garments for a specific body type. The recommendations can be given via the virtual fitting room to the users to use for their body models. The recommendations can in an example only be shown to users with the matching body size. The recommendation can alternatively be shown as an aspirational recommendation where users receive them to have something to strive towards” and “user can also get specific targeted adverts based on the garments they are viewing and be provided with that information from the retailer or third parties. The adverts can also be linked to location and can for instance be that the garment you are viewing is 10% off if you enter this close-by shop within 30 minutes”) (0389-0392, and 0398, 0391-0396);
generating, based on the first interactive visualization and the second product, a second interactive visualization (Examiner interprets iteratively selecting alternative garments/outfits to be tried on and displayed on the body model, which the Examiner interprets as generating a second interactive visualization with a different (second) product overlaid on the subject/body model) (“user can select what type of garment they would like to flick through to try out on the body model and also select one of the garments to stay on the body model. The user can also select to flick through whole outfits of garments to be tried on to the body model … browsing tool allows the user to flick vertically to change the set of garments to flick through. It can be understood as several different horizontal rows of garments that the user can alter between … user can select to continue to flick through alternative garments to replace the selected garment or the user can select to flick through alternative garments to be worn together with the selected garment …”) (0394-0396), wherein the second interactive visualization comprises modified content depicting the subject at which a representation of the second product is overlaid (“method of generating photo-realistic images of a garment combined onto a virtual body model, in which (a) a user locates a garment on a website; (b) a computer implemented system analyses the image of that garment from the website and then searches and identifies that garment in a database of previously analysed garments and then combines one or more virtual images of the garment from its database onto a virtual body model of the user and then displays to the user that combined garment and virtual body model”) (0955-0956), 0264-0266);
and transmitting, to the computing device for display at the interface, the second interactive visualization, wherein the representation of the second product is displayed at least partially over the subject (Examiner interprets displaying combined garment images on the virtual body model across devices/platforms such that the garment image is shown as combined onto (overlaid on) the body model, i.e. displaying the representation of the second product at least partially over the subject) (“generating and sharing a virtual body model of a person combined with an image of a garment, in which the virtual body model is generated by analysing and processing one or more photographs of a user, and a garment image is generated by analysing and processing one or more photographs of the garment; and in which the virtual body model is accessible or use-able by multiple different applications or multiple different web sites, such that images of the garment can, using any of these different applications or web sites, be seen as combined onto the virtual body model to enable visualization of what the garment will look like when worn … method may be one in which one or more of the different applications or web sites each displays, in association with the image of the garment, a single icon or button which, when selected, automatically causes one or more images of the garment to be combined onto the virtual body model to enable a person to visualize what the garment will look like when worn by them. The method may be one in which the combined image is a 3D photo-real image which the user can rotate and/or zoom. The method may be one in which one of the different web sites is a garment retail web site and the garment is available for purchase from that web site”) (0264-0266, 0377-0378, 0955-0958).
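For orientation only, the claim-mapped flow attributed to Adeyoola above (a body model generated from a user photograph and measurements, garment images combined onto that model, and alternative garments swapped in) can be summarized by the following Python sketch. This is a schematic of the mapped steps, not Adeyoola's implementation; all class, function, and file names are hypothetical and are not taken from the reference.

# Illustrative sketch only; names and structures are hypothetical, not from Adeyoola.
from dataclasses import dataclass

@dataclass
class BodyModel:
    photo: str          # identifier of the user's full-length photograph
    height_cm: float
    weight_kg: float

@dataclass
class Garment:
    name: str
    image: str          # identifier of the garment image in the garment library

def generate_body_model(photo, height_cm, weight_kg):
    """Build a (placeholder) virtual body model from a user photo and measurements."""
    return BodyModel(photo=photo, height_cm=height_cm, weight_kg=weight_kg)

def combine_garment(body_model, garment):
    """Return a (placeholder) visualization of the garment overlaid on the body model."""
    return {"subject": body_model.photo, "overlay": garment.image}

if __name__ == "__main__":
    model = generate_body_model("user_photo.jpg", 170.0, 65.0)
    first_view = combine_garment(model, Garment("coral dress", "coral_dress.png"))
    second_view = combine_garment(model, Garment("coral blouse", "coral_blouse.png"))
    print(first_view, second_view)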
Adeyoola does not specifically disclose receiving, from the computing device, bio-feedback data associated with the changed product attribute, wherein the bio-feedback data is captured via at least one sensor communicatively coupled to the computing device; however, Hurewitz discloses receiving, from the computing device, bio-feedback data associated with the changed product attribute, wherein the bio-feedback data is captured via at least one sensor communicatively coupled to the computing device (Examiner interprets capturing sensor-based bio-feedback (e.g., motion, facial expression, gaze) contemporaneous with user interaction with a particular feature/portion of the interactive visualization, which the Examiner interprets as bio-feedback “associated with the changed product attribute.”) (“determine the customer's emotional response to a particular part of the image that the customer is interacting with. Motion sensors or video cameras may record a customer's skeletal joint movement or facial expressions, and use that information to extrapolate how the customer felt about the particular feature of the product. The sensors may detect anatomical parameters such as a customer's gaze, posture, facial expression, skeletal joint movements, and relative body position. This information can be provided to a product manufacturer as aggregated information. The manufacturer may use the emotion information to design future products” and “The gesture data captured in step 1260 is associated with the specific portion of the 3D image that the customer 135 was interacting with when exhibiting the emotional response. For example, the customer 135 may have interacted with a particular 3D image animation simulating a door opening, turning knobs, opening drawers, placing virtual objects inside of the 3D image, etc. These actions are combined with the emotional response of the customer 135 at the time. In this way it can be determined how a customer 135 felt about a particular feature of a product”) (0076 and 0085-0087);
based at least in part on the bio-feedback data, determining one or more preferences toward the first product and the changed product attribute (Examiner notes that the underlined limitation is disclosed by another reference. Examiner interprets determining a user reaction/emotional response to a specific feature/portion of the product being interacted with based on sensor data (bio-feedback)) (“determine the customer's emotional response to a particular part of the image that the customer is interacting with. Motion sensors or video cameras may record a customer's skeletal joint movement or facial expressions, and use that information to extrapolate how the customer felt about the particular feature of the product. The sensors may detect anatomical parameters such as a customer's gaze, posture, facial expression, skeletal joint movements, and relative body position. This information can be provided to a product manufacturer as aggregated information. The manufacturer may use the emotion information to design future products” and “The gesture data captured in step 1260 is associated with the specific portion of the 3D image that the customer 135 was interacting with when exhibiting the emotional response. For example, the customer 135 may have interacted with a particular 3D image animation simulating a door opening, turning knobs, opening drawers, placing virtual objects inside of the 3D image, etc. These actions are combined with the emotional response of the customer 135 at the time. In this way it can be determined how a customer 135 felt about a particular feature of a product”) (0076 and 0085-0087).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the capturing of visual content data depicting a subject; receiving a selection of a first product, generating, based at least in part on the visual content data and the selection, a first interactive visualization comprising modified content depicting the subject at which a representation of the first product is overlaid, transmitting the first interactive visualization for display at an interface of the computing device, receiving an interaction with the first interactive visualization at the interface of the computing device that changes a product attribute of the first product, generating, based on the first interactive visualization and the second product, a second interactive visualization, wherein the second interactive visualization comprises modified content depicting the subject at which a representation of the second product is overlaid, and transmitting the second interactive visualization, wherein the representation of the second product is displayed at least partially over the subject, as disclosed by Adeyoola, to include receiving, from the computing device, bio-feedback data associated with the changed product attribute, wherein the bio-feedback data is captured via at least one sensor communicatively coupled to the computing device, as taught by Hurewitz, for the purpose of capturing user reactions during attribute/visualization interactions to improve preference detection, personalization, and product design feedback.
Adeyoola does not specifically disclose determining one or more preferences toward the first product and the changed product attribute; however, Mohajer discloses determining one or more preferences toward the first product and the changed product attribute (“machine-automated shoe store, if a shopper says, “I like these blue suede shoes”, the item database will store, in association with the particular pair of shoes, a value “blue” for a color attribute and a value “suede” for a material attribute. Knowing shopper's preference for blue color and suede material shoes, the store will proceed to show the shopper other shoes from the item database that are suede and other shoes that are blue”) (0062).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the capturing of visual content data depicting a subject; receiving a selection of a first product, generating, based at least in part on the visual content data and the selection, a first interactive visualization comprising modified content depicting the subject at which a representation of the first product is overlaid, transmitting the first interactive visualization for display at an interface of the computing device, receiving an interaction with the first interactive visualization at the interface of the computing device that changes a product attribute of the first product, generating, based on the first interactive visualization and the second product, a second interactive visualization, wherein the second interactive visualization comprises modified content depicting the subject at which a representation of the second product is overlaid, and transmitting the second interactive visualization, wherein the representation of the second product is displayed at least partially over the subject, as disclosed by Adeyoola, to include determining one or more preferences toward the first product and the changed product attribute, as taught by Mohajer, for the purpose of formalizing and storing user preferences for particular attributes (e.g., color/material) and using those preferences to identify additional items.
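For orientation only, Mohajer's quoted shoe-store example (storing expressed attribute preferences and recommending other items sharing those attribute values) can be summarized by the following Python sketch, omitting the natural-language parsing described in the reference. The data, function, and variable names are hypothetical and are not taken from the reference.

# Illustrative sketch only; names and data are hypothetical, not from Mohajer.
ITEM_DATABASE = [
    {"name": "blue suede loafer", "color": "blue", "material": "suede"},
    {"name": "blue canvas sneaker", "color": "blue", "material": "canvas"},
    {"name": "black leather boot", "color": "black", "material": "leather"},
]

USER_PREFERENCES = {}  # maps an attribute name to the user's preferred value

def record_preference(attribute, value):
    """Store a preference expressed toward an attribute of a liked item."""
    USER_PREFERENCES[attribute] = value

def recommend():
    """Return items matching any stored attribute preference."""
    return [item for item in ITEM_DATABASE
            if any(item.get(attr) == val for attr, val in USER_PREFERENCES.items())]

if __name__ == "__main__":
    record_preference("color", "blue")      # e.g., "I like these blue suede shoes"
    record_preference("material", "suede")
    print(recommend())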
As per claims 3, 10, and 16, Adeyoola discloses, identifying an area corresponding to the subject at which the representation of the first product is overlaid (Examiner interprets that the system combines/dresses the garment onto the virtual body model, the overlay necessarily occurs at a corresponding region/area of the subject (e.g., torso, legs, head), and thus Adeyoola inherently teaches identifying an area of the subject at which the product representation is overlaid) (“a user has a virtual body model of themselves, the method includes the steps of (a) the user selecting a garment from an on-screen library of virtual garments; (b) a processing system automatically generating an image of the garment combined onto the virtual body model, the garment being sized automatically to be a correct fit; (c) the processing system generating data defining how a physical version of that garment would be sized to provide that correct fit; (d) the system providing that data to a garment manufacturer to enable the manufacturer to make a garment that fits the user”) (0376, 0396).
Adeyoola does not specifically disclose identifying, based on the bio-feedback data, a movement pattern comprising a movement at the area, and wherein the determining the one or more preferences is based at least in part on the movement pattern at the area; however, Hurewitz discloses identifying, based on the bio-feedback data, a movement pattern comprising a movement at the area (“determine the customer's emotional response to a particular part of the image that the customer is interacting with. Motion sensors or video cameras may record a customer's skeletal joint movement or facial expressions, and use that information to extrapolate how the customer felt about the particular feature of the product. The sensors may detect anatomical parameters such as a customer's gaze, posture, facial expression, skeletal joint movements, and relative body position. This information can be provided to a product manufacturer as aggregated information. The manufacturer may use the emotion information to design future products” and “The gesture data captured in step 1260 is associated with the specific portion of the 3D image that the customer 135 was interacting with when exhibiting the emotional response. For example, the customer 135 may have interacted with a particular 3D image animation simulating a door opening, turning knobs, opening drawers, placing virtual objects inside of the 3D image, etc. These actions are combined with the emotional response of the customer 135 at the time. In this way it can be determined how a customer 135 felt about a particular feature of a product”) (0076 and 0085-0087);
and wherein the determining the one or more preferences is based at least in part on the movement pattern at the area (“determine the customer's emotional response to a particular part of the image that the customer is interacting with. Motion sensors or video cameras may record a customer's skeletal joint movement or facial expressions, and use that information to extrapolate how the customer felt about the particular feature of the product. The sensors may detect anatomical parameters such as a customer's gaze, posture, facial expression, skeletal joint movements, and relative body position. This information can be provided to a product manufacturer as aggregated information. The manufacturer may use the emotion information to design future products” and “The gesture data captured in step 1260 is associated with the specific portion of the 3D image that the customer 135 was interacting with when exhibiting the emotional response. For example, the customer 135 may have interacted with a particular 3D image animation simulating a door opening, turning knobs, opening drawers, placing virtual objects inside of the 3D image, etc. These actions are combined with the emotional response of the customer 135 at the time. In this way it can be determined how a customer 135 felt about a particular feature of a product”) (0076 and 0085-0087).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the capturing of visual content data depicting a subject; receiving a selection of a first product, generating, based at least in part on the visual content data and the selection, a first interactive visualization comprising modified content depicting the subject at which a representation of the first product is overlaid, transmitting the first interactive visualization for display at an interface of the computing device, receiving an interaction with the first interactive visualization at the interface of the computing device that changes a product attribute of the first product, generating, based on the first interactive visualization and the second product, a second interactive visualization, wherein the second interactive visualization comprises modified content depicting the subject at which a representation of the second product is overlaid, and transmitting the second interactive visualization, wherein the representation of the second product is displayed at least partially over the subject, as disclosed by Adeyoola, to include identifying, based on the bio-feedback data, a movement pattern comprising a movement at the area, and wherein the determining the one or more preferences is based at least in part on the movement pattern at the area, as taught by Hurewitz, for the purpose of capturing user reactions during attribute/visualization interactions to improve preference detection, personalization, and product design feedback.
As per claims 4, 11, and 17, Adeyoola does not specifically disclose wherein the area corresponding to the subject at which the first product is overlaid comprises a facial area, and wherein the movement pattern comprises a facial expression corresponding to the facial area; however, Hurewitz discloses wherein the area corresponding to the subject at which the first product is overlaid comprises a facial area, and wherein the movement pattern comprises a facial expression corresponding to the facial area (“determine the customer's emotional response to a particular part of the image that the customer is interacting with. Motion sensors or video cameras may record a customer's skeletal joint movement or facial expressions, and use that information to extrapolate how the customer felt about the particular feature of the product. The sensors may detect anatomical parameters such as a customer's gaze, posture, facial expression, skeletal joint movements, and relative body position. This information can be provided to a product manufacturer as aggregated information. The manufacturer may use the emotion information to design future products” and “Facial expression revealing a customer's emotions could also be detected by a video camera and associated with the part of the image that the customer was interacting with. Both facial expression and joint movement could be analyzed together to verify that the interpretation of the customer emotion is accurate”) (0076 and 0079, 0085-0087).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the capturing of visual content data depicting a subject; receiving a selection of a first product, generating, based at least in part on the visual content data and the selection, a first interactive visualization comprising modified content depicting the subject at which a representation of the first product is overlaid, transmitting the first interactive visualization for display at an interface of the computing device, receiving an interaction with the first interactive visualization at the interface of the computing device that changes a product attribute of the first product, generating, based on the first interactive visualization and the second product, a second interactive visualization, wherein the second interactive visualization comprises modified content depicting the subject at which a representation of the second product is overlaid, and transmitting the second interactive visualization, wherein the representation of the second product is displayed at least partially over the subject, as disclosed by Adeyoola, to include wherein the area corresponding to the subject at which the first product is overlaid comprises a facial area, and wherein the movement pattern comprises a facial expression corresponding to the facial area, as taught by Hurewitz, for the purpose of analyzing customer emotional reactions to specific products and providing this information to manufacturers to enhance future product designs.
As per claims 5, 12, and 18, Adeyoola does not specifically disclose wherein the facial area comprises a lip area corresponding to the subject, and wherein the facial expression comprises a lip movement at the lip area; however, Hurewitz discloses wherein the facial area comprises a lip area corresponding to the subject, and wherein the facial expression comprises a lip movement at the lip area (“determine the customer's emotional response to a particular part of the image that the customer is interacting with. Motion sensors or video cameras may record a customer's skeletal joint movement or facial expressions, and use that information to extrapolate how the customer felt about the particular feature of the product. The sensors may detect anatomical parameters such as a customer's gaze, posture, facial expression, skeletal joint movements, and relative body position. This information can be provided to a product manufacturer as aggregated information. The manufacturer may use the emotion information to design future products” and “Facial expression revealing a customer's emotions could also be detected by a video camera and associated with the part of the image that the customer was interacting with. Both facial expression and joint movement could be analyzed together to verify that the interpretation of the customer emotion is accurate”) (0076 and 0079, 0085-0087).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the capturing of visual content data depicting a subject; receiving a selection of a first product, generating, based at least in part on the visual content data and the selection, a first interactive visualization comprising modified content depicting the subject at which a representation of the first product is overlaid, transmitting the first interactive visualization for display at an interface of the computing device, receiving an interaction with the first interactive visualization at the interface of the computing device that changes a product attribute of the first product, generating, based on the first interactive visualization and the second product, a second interactive visualization, wherein the second interactive visualization comprises modified content depicting the subject at which a representation of the second product is overlaid, and transmitting the second interactive visualization, wherein the representation of the second product is displayed at least partially over the subject, as disclosed by Adeyoola, to include wherein the facial area comprises a lip area corresponding to the subject, and wherein the facial expression comprises a lip movement at the lip area, as taught by Hurewitz, for the purpose of capturing user reactions during attribute/visualization interactions to improve preference detection, personalization, and product design feedback.
As per claims 6, 13, and 19, Adeyoola does not specifically disclose wherein the area corresponding to the subject at which the first product is overlaid comprises a facial area, wherein the movement pattern comprises an eye movement indicative of a gaze directed at the facial area; however, Hurewitz discloses wherein the area corresponding to the subject at which the first product is overlaid comprises a facial area, wherein the movement pattern comprises an eye movement indicative of a gaze directed at the facial area, the method further comprising (“determine the customer's emotional response to a particular part of the image that the customer is interacting with. Motion sensors or video cameras may record a customer's skeletal joint movement or facial expressions, and use that information to extrapolate how the customer felt about the particular feature of the product. The sensors may detect anatomical parameters such as a customer's gaze, posture, facial expression, skeletal joint movements, and relative body position. This information can be provided to a product manufacturer as aggregated information. The manufacturer may use the emotion information to design future products” and “Facial expression revealing a customer's emotions could also be detected by a video camera and associated with the part of the image that the customer was interacting with. Both facial expression and joint movement could be analyzed together to verify that the interpretation of the customer emotion is accurate”) (0076 and 0079, 0085-0087).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Adeyoola, which discloses capturing visual content data depicting a subject; receiving a selection of a first product; generating, based at least in part on the visual content data and the selection, a first interactive visualization comprising modified content depicting the subject at which a representation of the first product is overlaid; transmitting the first interactive visualization for display at an interface of the computing device; receiving an interaction with the first interactive visualization at the interface of the computing device that changes a product attribute of the first product; generating, based on the first interactive visualization and the second product, a second interactive visualization, wherein the second interactive visualization comprises modified content depicting the subject at which a representation of the second product is overlaid; and transmitting the second interactive visualization, wherein the representation of the second product is displayed at least partially over the subject, to include wherein the area corresponding to the subject at which the first product is overlaid comprises a facial area, and wherein the movement pattern comprises an eye movement indicative of a gaze directed at the facial area, as taught by Hurewitz, for the purpose of capturing user reactions during attribute/visualization interactions to improve preference detection, personalization, and product design feedback.
Adeyoola does not specifically disclose storing an association between the gaze and the first product, and wherein the determining the one or more preferences is based at least in part on the association between the gaze and the first product; however, Mohajer discloses storing an association between the gaze and the first product (“user 11 makes a natural language expression that includes information about a value of an attribute of a particular item and a preference for the attribute. Parser 23 extracts all the information. The attribute value associated with the item is stored in an item database 25. The value of the preference attribute is stored in a user database 26. A recommendation engine 28 uses the user preference information from the user database 26 and information about items in the item database 25 to produce recommendations for the user 11” and “the parser 134 outputs environmental information if it is detected in user expressions. For example, if a user says, “today is so hot”, the system stores value “hot” for a “weather temperature” parameter in storage 139. Recommendation engine 118 uses the “weather temperature” parameter value to select a specific layer of user preference values. For example, a system determines that whenever the “weather temperature” environmental parameter is “hot”, the user expresses preferences for drink items with a “drink temperature” attribute value of “cold”, but when the “weather temperature” parameter has value “cold”, the user expresses preferences for drink items with “drink temperature” attribute “hot”. When recommendation engine 118 produces a drink recommendation, it chooses items from item database”) (0062 and 0103);
and wherein the determining the one or more preferences is based at least in part on the association between the gaze and the first product (“user 11 makes a natural language expression that includes information about a value of an attribute of a particular item and a preference for the attribute. Parser 23 extracts all the information. The attribute value associated with the item is stored in an item database 25. The value of the preference attribute is stored in a user database 26. A recommendation engine 28 uses the user preference information from the user database 26 and information about items in the item database 25 to produce recommendations for the user 11” and “the parser 134 outputs environmental information if it is detected in user expressions. For example, if a user says, “today is so hot”, the system stores value “hot” for a “weather temperature” parameter in storage 139. Recommendation engine 118 uses the “weather temperature” parameter value to select a specific layer of user preference values. For example, a system determines that whenever the “weather temperature” environmental parameter is “hot”, the user expresses preferences for drink items with a “drink temperature” attribute value of “cold”, but when the “weather temperature” parameter has value “cold”, the user expresses preferences for drink items with “drink temperature” attribute “hot”. When recommendation engine 118 produces a drink recommendation, it chooses items from item database”) (0062 and 0103).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Adeyoola, which discloses capturing visual content data depicting a subject; receiving a selection of a first product; generating, based at least in part on the visual content data and the selection, a first interactive visualization comprising modified content depicting the subject at which a representation of the first product is overlaid; transmitting the first interactive visualization for display at an interface of the computing device; receiving an interaction with the first interactive visualization at the interface of the computing device that changes a product attribute of the first product; generating, based on the first interactive visualization and the second product, a second interactive visualization, wherein the second interactive visualization comprises modified content depicting the subject at which a representation of the second product is overlaid; and transmitting the second interactive visualization, wherein the representation of the second product is displayed at least partially over the subject, to include storing an association between the gaze and the first product, and determining the one or more preferences based at least in part on the association between the gaze and the first product, as taught by Mohajer, for the purpose of formalizing and storing user preferences for particular attributes (e.g., color/material) and using those preferences to identify additional items.
As per claims 7 and 20, Adeyoola discloses, wherein the visual content data comprises image data depicting the subject, and wherein the modified content comprises modified image data depicting the subject at which the representation of the first product is overlaid (“method may be one in which the virtual body model is generated using one or more of the following: width of chest, height of crotch etc. The method may be one in which a user takes, or has taken for them, a single full length photograph of themselves which is then processed by a computer system that presents a virtual body model based on that photograph, together with markers whose position the user can adjust, the markers corresponding to some or all of the following: top of the head, bottom of heels, crotch height, width of waist, width of hips, width of chest. The method may be one where the user enters height, weight and, optionally, bra size. The method may be one comprising a computer system which then generates an accurate 3D virtual body model and displays that 3D virtual body model on screen. The method may be one where a computer system is a back-end server” and “user selects to dress the body model in a raincoat, the visualization of the look will include effects of rain. This can be a rainy background or for instance a layer of rain overlaid on the image. Similarly, if the user selects to dress the body model in a pair of swim shorts or a bikini, the background is changed to display a beach and the lighting is changed to simulate sunlight”) (0377-0378 and 0349).
As per claims 8 and 21, Adeyoola discloses, identifying a product type corresponding to the first product (“a user has a virtual body model of themselves, the method includes the steps of (a) the user selecting a garment from an on-screen library of virtual garments; (b) a processing system automatically generating an image of the garment combined onto the virtual body model, the garment being sized automatically to be a correct fit; (c) the processing system generating data defining how a physical version of that garment would be sized to provide that correct fit; (d) the system providing that data to a garment manufacturer to enable the manufacturer to make a garment that fits the user …” and “automatically generating hairstyle recommendations, in which a system receives as input one or more photographs of a user's face and then (a) analyses that face for facial geometry and (b) matches the facial geometry to a library of hairstyles, each hairstyle being previously indexed as suitable for one or more facial geometries, and (c) selects one or more optimally matching hairstyles and (d) outputs an image of that optimally matched hairstyle to the user”) (0376-0378 and 0379);
determining, based at least in part on the product type, that the first product modifies a facial area associated with a human body (“user selecting a garment from an on-screen library of virtual garments; (b) a processing system automatically generating an image of the garment combined onto the virtual body model, the garment being sized automatically to be a correct fit; (c) the processing system generating data defining how a physical version of that garment would be sized to provide that correct fit; (d) the system providing that data to a garment manufacturer to enable the manufacturer to make a garment that fits the user” and “method of generating photo-realistic images of a garment combined onto a virtual body model, in which (a) a user locates a garment on a website; (b) a computer implemented system analyses the image of that garment from the website and then searches and identifies that garment in a database of previously analysed garments and then combines one or more virtual images of the garment from its database onto a virtual body model of the user and then displays to the user that combined garment and virtual body model”) (0376-0378 and 0955);
identifying, in the image data, a facial area corresponding to the subject (“system receives as input one or more photographs of a user's face and then (a) analyses that face for facial geometry and (b) matches the facial geometry to a library of hairstyles, each hairstyle being previously indexed as suitable for one or more facial geometries, and (c) selects one or more optimally matching hairstyles and (d) outputs an image of that optimally matched hairstyle to the user” and “face is being changed to represent the common features for a person of the relevant age”) (0379 and 0625);
and wherein the modified image data depicts the subject having a modified facial area at which the representation of the first product is overlaid (“FIG. 17; a drawing of the photo positions is shown in FIG. 18. The user can in one example as a short cut click on “back view” to show the backside view of the body model when showing a look. The user can zoom the body model image and see the body model and the garment in closer detail as shown for example in FIG. 24. The user can change the head and the measurements of the body model from the store interface, for instance if the user would like to change to a different hairstyle” and “producing these different layers, and hence preserving a digital representation of the garment fabric in occluded regions, is to allow fitting of the garment to different body models. When the garment is stretched or contracted by different amounts in different regions to agree with a user's body model, the different layers will stretch or contract different amounts and so slide past each other. If a depiction of the occluded garment texture wasn't preserved in newly non-occluded regions then this sliding would uncover regions of the body with no fabric covering them. This would be an undesirable result”) (0317 and 0758-0759).
As per claim 9, Adeyoola discloses capturing, at a computing device, visual content data depicting a subject (Examiner notes that acquiring user image data (e.g., a full-length photograph of the user) via a computing device (e.g., mobile phone) for further processing into a virtual body model constitutes capturing, at a computing device, visual content data depicting a subject.) (“The method may be one in which a user takes, or has taken for them, a single full length photograph of themselves which is then processed by a computer system that presents a virtual body model based on that photograph, together with markers whose position the user can adjust, the markers corresponding to some or all of the following: top of the head, bottom of heels, crotch height, width of waist, width of hips, width of chest. The method may be one where the user enters height, weight and, optionally, bra size. The method may be one comprising a computer system which then generates an accurate 3D virtual body model and displays that 3D virtual body model on screen. The method may be one where a computer system is a back-end server.”) (0377-0378);
detecting, at the computing device, a selection of a first product (“user can actively select that an image is to be passed on to the Image Identification engine for garment detection. The user can also in one example indicate on a specific image what portion of the image includes the garment which is to be identified. This could be done for instance by indicating with a click in the middle of the garment or for instance by indicating the perimeters of the garment in the image to assist the Image Identification Engine”) (0973, Figs. 21A-B);
generating, for display at an interface of the computing device and based at least in part on the visual content data and the selection, a first interactive visualization comprising modified content depicting the subject at which a representation of the first product is overlaid (“user selecting a garment from an on-screen library of virtual garments; (b) a processing system automatically generating an image of the garment combined onto the virtual body model, the garment being sized automatically to be a correct fit; (c) the processing system generating data defining how a physical version of that garment would be sized to provide that correct fit; (d) the system providing that data to a garment manufacturer to enable the manufacturer to make a garment that fits the user” and “method of generating photo-realistic images of a garment combined onto a virtual body model, in which (a) a user locates a garment on a website; (b) a computer implemented system analyses the image of that garment from the website and then searches and identifies that garment in a database of previously analysed garments and then combines one or more virtual images of the garment from its database onto a virtual body model of the user and then displays to the user that combined garment and virtual body model”) (0376-0378 and 0955);
detecting, at the computing device, an interaction with the first interactive visualization at the interface that changes a product attribute of the first product (“user can select what type of garment they would like to flick through to try out on the body model and also select one of the garments to stay on the body model. The user can also select to flick through whole outfits of garments to be tried on to the body model … browsing tool allows the user to flick vertically to change the set of garments to flick through. It can be understood as several different horizontal rows of garments that the user can alter between … user can select to continue to flick through alternative garments to replace the selected garment or the user can select to flick through alternative garments to be worn together with the selected garment …”) (0394-0396);
identifying, at the computing device, a second product based on the one or more preferences towards the first product and the changed product attribute (“user can also get specific targeted adverts based on the garments they are viewing and be provided with that information from the retailer or third parties. The adverts can also be linked to location and can for instance be that the garment you are viewing is 10% off if you enter this close-by shop within 30 minutes”) (0398, 0391-0396);
generating, for display at the interface, a second interactive visualization (“user can select what type of garment they would like to flick through to try out on the body model and also select one of the garments to stay on the body model. The user can also select to flick through whole outfits of garments to be tried on to the body model … browsing tool allows the user to flick vertically to change the set of garments to flick through. It can be understood as several different horizontal rows of garments that the user can alter between … user can select to continue to flick through alternative garments to replace the selected garment or the user can select to flick through alternative garments to be worn together with the selected garment …”) (0394-0396), comprising modified content depicting the subject at which a representation of the second product is overlaid (“method of generating photo-realistic images of a garment combined onto a virtual body model, in which (a) a user locates a garment on a website; (b) a computer implemented system analyses the image of that garment from the website and then searches and identifies that garment in a database of previously analysed garments and then combines one or more virtual images of the garment from its database onto a virtual body model of the user and then displays to the user that combined garment and virtual body model”) (0955-0956), wherein the representation of the second product is displayed at least partially over the subject (“method of generating photo-realistic images of a garment combined onto a virtual body model, in which (a) a user locates a garment on a website; (b) a computer implemented system analyses the image of that garment from the website and then searches and identifies that garment in a database of previously analysed garments and then combines one or more virtual images of the garment from its database onto a virtual body model of the user and then displays to the user that combined garment and virtual body model … virtual fitting room is possible to view and interact with from different types of platforms, where the user will get the virtual fitting room experience from any device used. Using a multi channel approach aids the different core features of the different devices and also the different types of features that you would like to use when you are using a mobile phone for instance, in contrast to sitting at your desk by your computer”) (0955-0958).
Adeyoola does not specifically disclose capturing, at the computing device, bio-feedback data associated with the changed product attribute, wherein the bio-feedback data is captured via at least one sensor communicatively coupled to the computing device; however, Hurewitz discloses capturing, at the computing device, bio-feedback data associated with the changed product attribute, wherein the bio-feedback data is captured via at least one sensor communicatively coupled to the computing device (“determine the customer's emotional response to a particular part of the image that the customer is interacting with. Motion sensors or video cameras may record a customer's skeletal joint movement or facial expressions, and use that information to extrapolate how the customer felt about the particular feature of the product. The sensors may detect anatomical parameters such as a customer's gaze, posture, facial expression, skeletal joint movements, and relative body position. This information can be provided to a product manufacturer as aggregated information. The manufacturer may use the emotion information to design future products” and “The gesture data captured in step 1260 is associated with the specific portion of the 3D image that the customer 135 was interacting with when exhibiting the emotional response. For example, the customer 135 may have interacted with a particular 3D image animation simulating a door opening, turning knobs, opening drawers, placing virtual objects inside of the 3D image, etc. These actions are combined with the emotional response of the customer 135 at the time. In this way it can be determined how a customer 135 felt about a particular feature of a product”) (0076 and 0085-0087);
based at least in part on the bio-feedback data, determining, at the computing device, one or more preferences toward the first product and the changed product attribute (Examiner notes that the limitation of determining the one or more preferences is taught by another reference, as discussed below) (“determine the customer's emotional response to a particular part of the image that the customer is interacting with. Motion sensors or video cameras may record a customer's skeletal joint movement or facial expressions, and use that information to extrapolate how the customer felt about the particular feature of the product. The sensors may detect anatomical parameters such as a customer's gaze, posture, facial expression, skeletal joint movements, and relative body position. This information can be provided to a product manufacturer as aggregated information. The manufacturer may use the emotion information to design future products” and “The gesture data captured in step 1260 is associated with the specific portion of the 3D image that the customer 135 was interacting with when exhibiting the emotional response. For example, the customer 135 may have interacted with a particular 3D image animation simulating a door opening, turning knobs, opening drawers, placing virtual objects inside of the 3D image, etc. These actions are combined with the emotional response of the customer 135 at the time. In this way it can be determined how a customer 135 felt about a particular feature of a product”) (0076 and 0085-0087).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Adeyoola, which discloses capturing visual content data depicting a subject; receiving a selection of a first product; generating, based at least in part on the visual content data and the selection, a first interactive visualization comprising modified content depicting the subject at which a representation of the first product is overlaid; transmitting the first interactive visualization for display at an interface of the computing device; receiving an interaction with the first interactive visualization at the interface of the computing device that changes a product attribute of the first product; generating, based on the first interactive visualization and the second product, a second interactive visualization, wherein the second interactive visualization comprises modified content depicting the subject at which a representation of the second product is overlaid; and transmitting the second interactive visualization, wherein the representation of the second product is displayed at least partially over the subject, to include capturing, at the computing device, bio-feedback data associated with the changed product attribute, wherein the bio-feedback data is captured via at least one sensor communicatively coupled to the computing device, as taught by Hurewitz, for the purpose of capturing user reactions during attribute/visualization interactions to improve preference detection, personalization, and product design feedback.
Adeyoola does not specifically disclose determining, at the computing device, one or more preferences toward the first product and the changed product attribute; however, Mohajer discloses determining, at the computing device, one or more preferences toward the first product and the changed product attribute (“machine-automated shoe store, if a shopper says, “I like these blue suede shoes”, the item database will store, in association with the particular pair of shoes, a value “blue” for a color attribute and a value “suede” for a material attribute. Knowing shopper's preference for blue color and suede material shoes, the store will proceed to show the shopper other shoes from the item database that are suede and other shoes that are blue”) (0062).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Adeyoola, which discloses capturing visual content data depicting a subject; receiving a selection of a first product; generating, based at least in part on the visual content data and the selection, a first interactive visualization comprising modified content depicting the subject at which a representation of the first product is overlaid; transmitting the first interactive visualization for display at an interface of the computing device; receiving an interaction with the first interactive visualization at the interface of the computing device that changes a product attribute of the first product; generating, based on the first interactive visualization and the second product, a second interactive visualization, wherein the second interactive visualization comprises modified content depicting the subject at which a representation of the second product is overlaid; and transmitting the second interactive visualization, wherein the representation of the second product is displayed at least partially over the subject, to include determining one or more preferences toward the first product and the changed product attribute, as taught by Mohajer, for the purpose of formalizing and storing user preferences for particular attributes (e.g., color/material) and using those preferences to identify additional items.
As per claim 14, Adeyoola discloses wherein the visual content data comprises image data depicting the subject, and wherein the modified content comprises modified image data depicting the subject at which the representation of the first product is overlaid, the method further comprising (“method may be one in which the virtual body model is generated using one or more of the following: width of chest, height of crotch etc. The method may be one in which a user takes, or has taken for them, a single full length photograph of themselves which is then processed by a computer system that presents a virtual body model based on that photograph, together with markers whose position the user can adjust, the markers corresponding to some or all of the following: top of the head, bottom of heels, crotch height, width of waist, width of hips, width of chest. The method may be one where the user enters height, weight and, optionally, bra size. The method may be one comprising a computer system which then generates an accurate 3D virtual body model and displays that 3D virtual body model on screen. The method may be one where a computer system is a back-end server” and “user selects to dress the body model in a raincoat, the visualization of the look will include effects of rain. This can be a rainy background or for instance a layer of rain overlaid on the image. Similarly, if the user selects to dress the body model in a pair of swim shorts or a bikini, the background is changed to display a beach and the lighting is changed to simulate sunlight”) (0377-0378 and 0349):
identifying a product type corresponding to the first product (“a user has a virtual body model of themselves, the method includes the steps of (a) the user selecting a garment from an on-screen library of virtual garments; (b) a processing system automatically generating an image of the garment combined onto the virtual body model, the garment being sized automatically to be a correct fit; (c) the processing system generating data defining how a physical version of that garment would be sized to provide that correct fit; (d) the system providing that data to a garment manufacturer to enable the manufacturer to make a garment that fits the user …” and “automatically generating hairstyle recommendations, in which a system receives as input one or more photographs of a user's face and then (a) analyses that face for facial geometry and (b) matches the facial geometry to a library of hairstyles, each hairstyle being previously indexed as suitable for one or more facial geometries, and (c) selects one or more optimally matching hairstyles and (d) outputs an image of that optimally matched hairstyle to the user”) (0376-0378 and 0379);
determining, based at least in part on the product type, that the first product modifies a facial area associated with a human body (“user selecting a garment from an on-screen library of virtual garments; (b) a processing system automatically generating an image of the garment combined onto the virtual body model, the garment being sized automatically to be a correct fit; (c) the processing system generating data defining how a physical version of that garment would be sized to provide that correct fit; (d) the system providing that data to a garment manufacturer to enable the manufacturer to make a garment that fits the user” and “method of generating photo-realistic images of a garment combined onto a virtual body model, in which (a) a user locates a garment on a website; (b) a computer implemented system analyses the image of that garment from the website and then searches and identifies that garment in a database of previously analysed garments and then combines one or more virtual images of the garment from its database onto a virtual body model of the user and then displays to the user that combined garment and virtual body model”) (0376-0378 and 0955);
identifying, in the image data, a facial area corresponding to the subject (“system receives as input one or more photographs of a user's face and then (a) analyses that face for facial geometry and (b) matches the facial geometry to a library of hairstyles, each hairstyle being previously indexed as suitable for one or more facial geometries, and (c) selects one or more optimally matching hairstyles and (d) outputs an image of that optimally matched hairstyle to the user” and “face is being changed to represent the common features for a person of the relevant age”) (0379 and 0625);
and wherein the modified image data depicts the subject having a modified facial area at which the representation of the first product is overlaid (“FIG. 17; a drawing of the photo positions is shown in FIG. 18. The user can in one example as a short cut click on “back view” to show the backside view of the body model when showing a look. The user can zoom the body model image and see the body model and the garment in closer detail as shown for example in FIG. 24. The user can change the head and the measurements of the body model from the store interface, for instance if the user would like to change to a different hairstyle” and “producing these different layers, and hence preserving a digital representation of the garment fabric in occluded regions, is to allow fitting of the garment to different body models. When the garment is stretched or contracted by different amounts in different regions to agree with a user's body model, the different layers will stretch or contract different amounts and so slide past each other. If a depiction of the occluded garment texture wasn't preserved in newly non-occluded regions then this sliding would uncover regions of the body with no fabric covering them. This would be an undesirable result”) (0317 and 0758-0759).
Conclusion
25. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: U.S. Pat. Pub. No. 2013/0066750 (“Siddique”).
Siddique discloses methods and systems related to an online apparel modeling system that allows users to have three-dimensional models of their physical profiles created. Users may purchase various goods and/or services and collaborate with other users in the online environment.
26. Any inquiry concerning this communication or earlier communications from the examiner should be directed to GAUTAM UBALE, whose telephone number is (571) 272-9861. The examiner can normally be reached Mon-Fri, 7:00 AM - 6:30 PM PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein, can be reached at (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GAUTAM UBALE/
Primary Examiner, Art Unit 3689