DETAILED ACTION
Status of Claims
Claims 1-3, 5, 6, and 8-23, submitted on 01/06/2026, are pending and have been examined. Claims 1, 6, 22, and 23 have been amended. Claims 4 and 7 have been canceled.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent application No. AU2021902385, filed on 08/02/2021.
Acknowledgment is made of applicant's claim for benefit under 35 U.S.C. 371 of international application No. PCT/AU2022/050824, filed on 08/02/2022.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: an “image capture facility operable to obtain…” and a “searching facility configured to identify…” in claim 1. The limitations use the nonce terms “image capture facility” and “searching facility,” which are modified by functional language, i.e., “to obtain” and “to identify…,” and are not modified by sufficient structure for performing the obtaining or identifying functions. The corresponding structures are found in ¶0058, ¶0090, Fig. 3, and Fig. 5 of the instant specification. Dependent claims 2, 3, 5, 6, and 8-21 inherit the interpretation of claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 5, 6, and 8-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. The claims recite an abstract idea. This judicial exception is not integrated into a practical application. The claim(s) do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Step 1
Claims 1-3, 5, 6, and 8-21 are directed to a machine, claim 22 is directed to a process, and claim 23 is directed to an article of manufacture (see MPEP 2106.03).
Step 2A, Prong 1
Claim 1, taken as representative, recites at least the following limitations that recite an abstract idea:
facilitating the purchase of wearable items in an environment, including:
obtain multiple optical images of a body part of an individual along with an object of known dimensions that is attached to, or located in proximity with, the body part, and resolving, by an optical resolution technique, the images to generate a three-dimensional model of the body part, wherein the resolving of optical images to generate the three-dimensional model includes comparison of the body part images with the object of known dimensions to further provide relative and/or absolute sizing information regarding the three-dimensional model of the body part,
wherein obtaining the multiple optical images of the body part includes:
utilizing, including passing over the body part; and
providing guidance regarding adequate capture of body part images, and
audible and/or visual prompts to guide the individual when capturing the multiple optical images including when they have attained sufficient images to enable generation of the three-dimensional model of the body part with sufficient data to determine the dimensions of features in the model;
receive input data from one or more retailers offering a range of wearable items for purchase, the input data including detailed dimensions of the wearable items;
store the physical dimensions of the body part, as determined from the sizing information provided by the three-dimensional model;
identify one or more wearable items of interest in the range of items offered for purchase that include detailed dimensions similar to the physical dimensions of the body part as determined from the sizing information provided by the three-dimensional model, according to a similarity threshold;
generate and provide a display of the one or more wearable items of interest that include based on the similarity threshold, similar detailed dimensions as compared with the physical dimensions of the body part according to the sizing information provided by the three-dimensional model of the body part.
The above limitations, under their broadest reasonable interpretation, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas enumerated in MPEP 2106.04(a)(2)(II), in that they recite a commercial interaction (see ¶0001). Claims 22 and 23 recite similar limitations to those of claim 1.
Thus, under Prong 1 of Step 2A, claims 1, 22, and 23 recite an abstract idea.
Step 2A, Prong 2
Claim 1 includes the following additional elements, identified in the claim as reproduced below:
a computer-implemented system for facilitating the purchase of wearable items in an online environment, the system including:
an image capture facility operable to obtain multiple optical images of a body part of an individual along with an object of known dimensions that is attached to, or located in proximity with, the body part and thereby also in view of the image capture facility, and resolving, by an optical resolution technique, the images to generate a three-dimensional model of the body part, wherein the resolving of optical images to generate the three-dimensional model includes comparison of the body part images with the object of known dimensions to further provide relative and/or absolute sizing information regarding the three-dimensional model of the body part,
wherein obtaining the multiple optical images of the body part includes:
utilizing an optical hardware component associated with a mobile data communications device, including passing an optical lens associated with the optical hardware component over the body part; and
providing guidance regarding adequate capture of body part images, and
audible and/or visual prompts to guide the individual when capturing the multiple optical images including when they have attained sufficient images to enable generation of the three-dimensional model of the body part with sufficient data to determine the dimensions of features in the model;
one or more processors operable to receive input data from one or more retailers offering a range of wearable items for purchase, the input data including detailed dimensions of the wearable items;
one or more databases in communication with the one or more processors, the one or more databases configured to store the physical dimensions of the body part, as determined from the sizing information provided by the three-dimensional model;
a searching facility configured to identify one or more wearable items of interest in the range of items offered for purchase that include detailed dimensions similar to the physical dimensions of the body part as determined from the sizing information provided by the three-dimensional model, according to a similarity threshold;
the one or more processors further operable to:
generate and provide a display of the one or more wearable items of interest that include based on the similarity threshold, similar detailed dimensions as compared with the physical dimensions of the body part according to the sizing information provided by the three-dimensional model of the body part.
The additional elements recited in claims 1, 22, and 23 merely invoke such elements as a tool to perform the abstract idea and generally link the use of the abstract idea to a particular technological environment of computers (see MPEP 2106.05(f) and MPEP 2106.05(h)). These additional elements are described at a high level in Applicant’s specification without any meaningful detail about their structure or configuration (see ¶¶0073-0078).
As such, under Prong 2 of Step 2A, when considered both individually and as a whole, the additional elements do not integrate the judicial exception into a practical application and, thus, claims 1, 22, and 23 are directed to an abstract idea.
Step 2B
As noted above, while the recitation of the additional elements in independent claims 1, 22, and 23 is acknowledged, claims 1, 22, and 23 merely invoke such additional elements as a tool to perform the abstract idea and generally link the use of the abstract idea to a particular technological environment (see MPEP 2106.05(f) and MPEP 2106.05(h)).
Even when considered as an ordered combination, the additional elements of claims 1, 22, and 23 do not add anything that is not already present when they are considered individually. Therefore, under Step 2B, there are no meaningful limitations in claims 1, 22, and 23 that transform the judicial exception into a patent-eligible application such that the claims amount to significantly more than the judicial exception itself (see MPEP 2106.05).
As such, independent claims 1, 22, and 23 are ineligible.
Dependent claims 3, 6, and 8-17, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because they do not add “significantly more” to the abstract idea. More specifically, dependent claims 3, 6, and 8-17 merely further define the abstract limitations of claims 1, 22, and 23 or provide further embellishments of the limitations recited in those independent claims. Claims 3, 6, and 8-17 do not introduce any further additional elements. Thus, dependent claims 3, 6, and 8-17 are ineligible.
Furthermore, it is noted that certain dependent claims recite additional elements supplemental to those recited in independent claims 1, 22, and 23: a software application operable on a data communications device (claim 2), fixed body image capture hardware (claim 5), user interfaces (claim 18), a web browser (claim 18), an online store (claims 18, 19, and 21), computer systems (claim 20), and an application programming interface (API) (claim 21). However, these elements do not integrate the abstract idea into a practical application because they merely amount to using a computer to apply the abstract idea in a particular technological environment or field of use. These additional elements likewise do not amount to significantly more because they amount to no more than a general link of the use of the abstract idea to a particular technological environment.
Thus, dependent claims 2, 5, and 18-21 are ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5, 6, 9, 10, 12-15, 17-19, 22, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Haitani et al. (US 10,776,861 B1 [previously cited]) in view of Lawrence et al. (US 2011/0022965 A1 [previously cited]), and further in view of Blanchflower et al. (US 2016/0163098 A1 [previously cited]).
Regarding Claim 1, Haitani et al., hereinafter, Haitani, discloses a computer-implemented system for facilitating the purchase of wearable items in an online environment, the system including (Fig. 1G-H; Col. 5, line 51 to Col. 6, line 22[As is shown in FIG. 1G, the marketplace server 112 may identify a plurality of items 130A, 130B (e.g., polo shirts) in response to the entry of the keyword at the network page 116-1, and may display a network page 116-2 including information or data regarding such items 130A, 130B (e.g., names, images, prices, details, customer ratings and/or any number of other interactive features). After the customer 170 selects the item 130A shown on the network page 116-2, a network page 116-3 including a plurality of details regarding the item 130A is displayed, as is shown in FIG. 1H. The network page 116-3 further depicts a visual representation 140 of the item 130A on an avatar 122 of the customer 170 that is derived based on the point cloud 120.]):
an image capture facility operable to obtain multiple optical images of a body part of an individual (Figs. 1A[showing the image capture facility] and 3A; Col. 26, line 58 to Col. 27, line 11[At box 320, a customer positions himself or herself within a field of view of an imaging device. For example, referring again to FIG. 1A, an imaging device such as the depth sensing camera 180 may be mounted within the customer's home, or within a bricks-and-mortar retail establishment or other publicly accessible location, and aligned to enable a customer to freely stand in any pose or execute any gesture within its field of view. At box 325, the imaging device captures a plurality of depth images of the customer in a number of orientations.]), and
resolving, by an optical resolution technique, the images to generate a three-dimensional model of the body part, wherein the resolving of optical images to generate the three-dimensional model includes comparison of the body part images to further provide relative and/or absolute sizing information regarding the three-dimensional model of the body part (Fig. 4; Col. 28, line 60 to Col. 29, line 5[a single depth image, e.g., any one of the depth images 420A, 420B, 420C, 420D, 420E, 420F, 420G, 420H, or a single visual image may be used to generate an avatar or other three-dimensional model of a customer. For example, in some embodiments, where a single depth image or a single visual image is captured or otherwise obtained from the customer (e.g., uploaded by the customer), one or more dimensions or other physical properties of the customer may be determined based on the single depth image or the single visual image and used to select a corresponding virtual mannequin or a virtual body template of a customer from which an avatar or three-dimensional model may be generated.] in view of Col. 18, lines 12-25[The body templates 224 may be any collections of information or data regarding one or more bodies having standard sizes, shapes or dimensions, any of which may be selected as a basis for generating an avatar 222 or other three-dimensional model, based on a comparison or proximity to surface data 220 of a customer. Each of the body templates 224 may include unique data points corresponding to lengths, circumferences, diameters or thicknesses of heads, necks, shoulders, backs, arms, waists, hips, seats, legs or feet. One of the body templates 224 may be selected based on a comparison of the respective data points to position data (e.g., skeletal data) of a customer]),
wherein obtaining the multiple optical images of the body part includes: utilizing an optical hardware component associated with a mobile data communications device, including passing an optical lens associated with the optical hardware component over the body part (Col. 7, lines 18-44[Those of ordinary skill in the pertinent arts will recognize that imaging data, e.g., visual imaging data, depth imaging data, infrared imaging data, or imaging data of any other type or form, may be captured using one or more imaging devices such as digital cameras, depth sensors, range cameras, infrared cameras or radiographic cameras… Such data files may also be printed, displayed on one or more broadcast or closed-circuit television networks, or transmitted over a computer network as the Internet]; see ¶0013 of the instant specification where a smartphone camera being capable of capturing images of the body of an individual is stated as being an example of “passing an optical lens over the body part”); and
providing adequate capture of body part images, and audible and/or visual prompts to the individual when capturing the multiple optical images to enable generation of the three-dimensional model of the body part to determine the dimensions of features in the model (Fig. 1A and 4; Col. 25, lines 1-7[The computers, servers, devices and the like described herein have the necessary electronics, software, memory, storage, databases, firmware, logic/state machines, microprocessors, communication links, displays or other visual or audio user interfaces, printing devices, and any other input/output interfaces to provide any of the functions or services described herein and/or achieve the results described herein.]);
one or more processors operable to receive input data from one or more retailers offering a range of wearable items for purchase, the input data including detailed dimensions of the wearable items (Figs. 1A, 1F and 1G; Col. 5, line 23 to Col. 6, line 47[the customer 170 may access one or more network pages 116-1 hosted by the marketplace server 112 or otherwise associated with an electronic marketplace. The marketplace server 112 may have a variety of data regarding clothing… After the customer 170 selects the item 130A shown on the network page 116-2, a network page 116-3 including a plurality of details regarding the item 130A is displayed, as is shown in FIG. 1H… The systems and methods disclosed herein enable customers to virtually “try on” articles of clothing prior to purchasing them from an electronic marketplace, and to select from one or more models or sizes of clothing without having to set foot in a bricks-and-mortar retailer] in view of Col. 17, lines 16-31; see also ¶0004 of the instant specification where size and dimension are used interchangeably);
one or more databases in communication with the one or more processors, the one or more databases configured to store the physical dimensions of the body part, as determined from the sizing information provided by the three-dimensional model (Fig. 2; Col. 17, lines 32-58[The data stores 214 may include any type of information or data regarding items that have been made available for sale through the marketplace 210, or ordered by customers, such as the customer 270, from the marketplace 210, or any information or data regarding customers. For example, as is shown in FIG. 2, the data store 214 includes surface data 220, one or more avatars 222, one or more body templates 224, clothing data 230, one or more customer profiles 232 and context data 234. The surface data 220 may include any information or data such as depth images (e.g., information relating to distances of surfaces of objects such as customers or clothing within a scene from a perspective of an imaging device), point clouds (e.g., a grouping of data points corresponding to external surfaces of customers derived from depth data), visual images (e.g., black-and-white, grayscale, or color images) or any other representation of data corresponding to surfaces of an object, with individual points in space having coordinates defining their respective locations in absolute terms or relative to an imaging system according to a standard coordinate system]; Examiner notes that data stores are comparable to databases);
a searching facility configured to identify one or more wearable items of interest in the range of items offered for purchase that include detailed dimensions similar to the physical dimensions of the body part as determined from the sizing information provided by the three-dimensional model, according to a similarity threshold (Fig. 1A[the marketplace is comparable to a searching facility]; Col. 15, lines 51-67[a recommendation of an item, or a specific size or style of the item, may be identified for a customer based on his or her avatar or other three-dimensional model. For example, rather than generally identifying items for customers by traditional techniques such as collaborative filtering, e.g., by identifying items based on prior purchases by a customer or another customer, some embodiments of the present disclosure may identify items, or sizes or styles of items, based on items that were previously purchased or worn by customers having similar body sizes, shapes or dimensions that are similar to those of the customer.] in view of Col. 13, lines 10-20[For example, where a customer is interested in purchasing an article of clothing in a specific size, an avatar or other three-dimensional model may be used to determine whether the article of clothing will fit the customer, or to determine a quality of the fit]);
the one or more processors further operable to: generate and provide a display of the one or more wearable items of interest that include based on the similarity threshold, similar detailed dimensions as compared with the physical dimensions of the body part according to the sizing information provided by the three-dimensional model of the body part (Fig. 1A[the marketplace is comparable to a searching facility], Figs. 1H, 8A, 8B and 10[showing the display of clothing on a customer’s avatar]; Col. 15, lines 51-67[a recommendation of an item, or a specific size or style of the item, may be identified for a customer based on his or her avatar or other three-dimensional model. For example, rather than generally identifying items for customers by traditional techniques such as collaborative filtering, e.g., by identifying items based on prior purchases by a customer or another customer, some embodiments of the present disclosure may identify items, or sizes or styles of items, based on items that were previously purchased or worn by customers having similar body sizes, shapes or dimensions that are similar to those of the customer.] in view of Col. 13, lines 10-20[For example, where a customer is interested in purchasing an article of clothing in a specific size, an avatar or other three-dimensional model may be used to determine whether the article of clothing will fit the customer, or to determine a quality of the fit]).
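For clarity of the record, the following sketch illustrates the kind of similarity-threshold matching recited in the searching-facility limitation mapped above. It is purely illustrative: the dimension names, the 10 mm threshold, the data layout, and the example catalog are the examiner's assumptions and appear in neither the claims nor the cited art.

```python
def items_within_threshold(body_dims, items, threshold_mm=10.0):
    """Return names of items whose detailed dimensions all fall within a
    fixed tolerance of the corresponding body-part dimensions."""
    matches = []
    for item in items:
        # Compare each stored body dimension against the item's dimension.
        deltas = [abs(item["dims"][k] - body_dims[k]) for k in body_dims]
        if max(deltas) <= threshold_mm:  # every dimension must be close enough
            matches.append(item["name"])
    return matches

# Hypothetical body dimensions derived from a 3D model, and a retailer catalog:
body = {"length_mm": 250.0, "width_mm": 98.0}
catalog = [
    {"name": "shoe A", "dims": {"length_mm": 255.0, "width_mm": 100.0}},
    {"name": "shoe B", "dims": {"length_mm": 270.0, "width_mm": 104.0}},
]
print(items_within_threshold(body, catalog))  # only shoe A is within 10 mm on both dimensions
```

Under these assumptions only "shoe A" satisfies the threshold; how the threshold is chosen, and whether it applies per dimension or in aggregate, is not specified by the claims.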
Although Haitani discloses capturing images of a user’s body parts and generating a 3D model of their body, Haitani does not explicitly disclose capturing images of body parts along with an object of known dimensions that is attached to, or located in proximity with, the body part and thereby also in view of the image capture facility and a comparison of the body part images with the object of known dimensions.
However, Lawrence et al., hereinafter, Lawrence, teaches capturing images of objects of interest along with an object of known dimensions nearby in order to generate a model through a comparison with the object of known dimensions (¶0026[For example, the electronic device can determine the ratio of lengths of the user's arm to the user's torso or legs. In some embodiments, the electronic device can determine an absolute or exact value for dimensions of the user's body by comparing the body lengths with known lengths in an image (e.g., relative to a known car length in the image, or other object in the image). The electronic device can use any suitable number of images to generate an avatar, including for example several images from different angles.]).
The system of Lawrence is applicable to the system of Haitani as they share characteristics and capabilities; namely, they are both directed to generating avatars of a user to which articles of clothing may be applied. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the 3D model generation disclosed by Haitani to use an object of known dimensions as a reference for calculating dimensions, as taught by Lawrence. One of ordinary skill in the art would have been motivated to expand the system of Haitani in order to define a personalized avatar providing a true representation of the user's body (Lawrence: ¶0006).
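The known-dimension reference technique taught by Lawrence (¶0026) can be illustrated with a short sketch. The function name, the reference object, and all numeric values are the examiner's assumptions for illustration only; Lawrence's example uses a known car length in the image.

```python
def scale_from_reference(ref_known_mm, ref_pixels, body_pixels):
    """Convert a pixel-space measurement of a body part into absolute units
    by comparison with an object of known dimensions in the same image."""
    if ref_pixels <= 0:
        raise ValueError("reference object must be resolvable in the image")
    mm_per_pixel = ref_known_mm / ref_pixels  # absolute scale implied by the reference
    return body_pixels * mm_per_pixel

# e.g., a reference card 85.6 mm wide spanning 214 pixels, and a wrist
# spanning 406 pixels in the same image, yields an absolute wrist width:
wrist_mm = scale_from_reference(85.6, 214.0, 406.0)
```

The same per-pixel scale, once established, can size every feature of the three-dimensional model, which is the role the claimed "object of known dimensions" plays in the combination.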
Although Haitani discloses capturing body part images and providing audible or visual output to the individual in order to generate 3D models of body parts, Haitani in view of Lawrence does not explicitly teach providing guidance regarding adequate capture of body part images, or audible or visual prompts to guide the individual when capturing images, including when they have attained sufficient images to enable generation of the model with sufficient data.
However, Blanchflower et al., hereinafter, Blanchflower, teaches providing guidance regarding sufficient and adequate data to generate 3D models (Fig. 1; ¶0012[The mobile computing device 110 includes an object capture guidance user interface 115 that helps to guide the user in capturing sufficient views of the target object 120 such that an accurate, well-defined three-dimensional model of the object may be created.]).
The system of Blanchflower is applicable to the system of Haitani in view of Lawrence as they share characteristics and capabilities; namely, they are all directed to generating 3D models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the 3D model generation taught by Haitani in view of Lawrence to include guiding the user to ensure capture of sufficient data, as taught by Blanchflower. One of ordinary skill in the art would have been motivated to expand the system of Haitani in view of Lawrence in order to determine whether sufficient visual information exists in the plurality of two-dimensional images to generate a three-dimensional model of the object (Blanchflower: Abstract).
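The capture-guidance behavior attributed to Blanchflower (prompting the user until sufficient views exist for an accurate model) might be sketched as follows. The view-count heuristic and the prompt wording are the examiner's assumptions; Blanchflower's guidance interface (Fig. 1, ¶0012) is not limited to counting views.

```python
def capture_guidance(views_captured, views_required=8):
    """Return a user-facing prompt indicating whether enough views of the
    body part have been captured to build the three-dimensional model."""
    if views_captured >= views_required:
        return "Capture complete: sufficient images for 3D model generation."
    remaining = views_required - views_captured
    return f"Keep moving the camera: {remaining} more view(s) needed."
```

In the combination, such prompts would be surfaced audibly and/or visually through the mobile device of Haitani while the user passes the lens over the body part.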
Regarding Claim 2, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 1, Haitani further discloses wherein the searching facility is operable by an individual utilizing a software application operable on a data communications device (Col. 19, lines 9-20[The marketplace 210 may be associated with one or more fulfillment centers, warehouses or other storage or distribution facilities, any of which may be adapted to receive, store, process and/or distribute items. Such facilities may include any number of servers, data stores and/or processors for operating one or more order processing and/or communication systems and/or software applications having one or more user interfaces, or communicate with one or more other computing devices or machines that may be connected to the network 290, for transmitting or receiving information in the form of digital or analog data, or for any other purpose.]).
Regarding Claim 3, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 1, Haitani further discloses wherein the body part of the individual is a body part, a combination of body parts, a body region or regions, or an entire body of the individual (Fig. 4[showing an entire body of the individual]; Col. 28, line 60 to Col. 29, line 5[a single depth image, e.g., any one of the depth images 420A, 420B, 420C, 420D, 420E, 420F, 420G, 420H, or a single visual image may be used to generate an avatar or other three-dimensional model of a customer. For example, in some embodiments, where a single depth image or a single visual image is captured or otherwise obtained from the customer (e.g., uploaded by the customer), one or more dimensions or other physical properties of the customer may be determined based on the single depth image or the single visual image and used to select a corresponding virtual mannequin or a virtual body template of a customer from which an avatar or three-dimensional model may be generated.]).
Regarding Claim 5, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 1, Haitani further discloses wherein obtaining optical images of the body part of the individual includes utilizing fixed body image capture hardware associated with a retailer which is configured to capture the multiple optical images (Fig. 1A; Col. 26, lines 59-65[For example, referring again to FIG. 1A, an imaging device such as the depth sensing camera 180 may be mounted within the customer's home, or within a bricks-and-mortar retail establishment or other publicly accessible location, and aligned to enable a customer to freely stand in any pose or execute any gesture within its field of view] in view of Col. 22, lines 37-40[For example, the RGBD camera 280 may be hard-mounted to a support or mounting that maintains the device in a fixed configuration or angle with respect to one, two or three axes.]).
Regarding Claim 6, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 1, Haitani further discloses wherein the optical images of the body part are captured from a range of different angles to enable generation of the three-dimensional model (Fig. 4[showing different angles]; Col. 8, line 59 to Col. 9, line 5[Many imaging devices also include manual or automatic features for modifying their respective fields of view or orientations… an imaging device may include one or more actuated or motorized features for adjusting a position of the imaging device, or for adjusting either the focal length (e.g., a zoom level of the imaging device) or the angular orientation (e.g., the roll angle, the pitch angle or the yaw angle), by causing a change in the distance between the sensor and the lens (e.g., optical zoom lenses or digital zoom lenses), a change in the location of the imaging device, or a change in one or more of the angles defining the angular orientation.]).
Regarding Claim 9, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 1, Haitani further discloses wherein the one or more processors are further operable to generate a display of the physical dimensions and/or the three-dimensional model of the body part (Figs. 13-15[showing the display of the 3D models]; Col. 11, line 38 to Col. 12, line 11[The three-dimensional models may be used for any purpose, including but not limited to displaying views of articles of clothing depicted thereon, thereby providing a more accurate representation of how a specific article of clothing will look or fit on a given customer, in any context, either alone or with one or more other articles of clothing.]).
Regarding Claim 10, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 1, Haitani further discloses wherein the generated display of the one or more wearable items of interest that include similar detailed dimensions as compared with the physical dimensions of the body part includes a scrollable listing of items (Figs. 1A and 1G; Col. 5, lines 37-45[Alternatively, the customer 170 may select or otherwise designate a category of items at a network page having icons, links or drop-down menus listing or representing such categories thereon. In some other embodiments, the customer 170 may enter the keyword or select a category at a page or interface provided by a dedicated application (e.g., a shopping application) operating on a computer device such as a smartphone or tablet computer]; Examiner notes that “drop-down menu” is comparable to a scrollable listing of items).
Regarding Claim 12, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 1, Haitani further discloses wherein the one or more processors are further operable to generate a prompt to the individual to provide an updated series of optical images of the body part for the purpose of ensuring that the stored physical dimensions of the body part are accurate and representative of current physical dimensions of the body part (Col. 12, lines 61-66[An avatar or other three-dimensional model may be updated over time based on imaging data or other information that is subsequently obtained and associated with the customer, such that the quality of the avatar or other three-dimensional model is progressively enhanced as the quality of the available imaging data improves.] and Col. 18, lines 5-11[Additionally, the avatars 222 or other three-dimensional models may be updated in real time, in near-real time, or at any time based on additional surface data 220 that may be obtained regarding a customer, such as the customer 270, or, alternatively, based on any predicted changes in the sizes, shapes or dimensions of such a customer.] in view of Col. 43, lines 7-14[with or without user input or prompting]).
Regarding Claim 13, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 1, Haitani further discloses wherein the searching facility enables the individual to filter the displayed one or more wearable items of interest (Fig. 1H; Col. 32, lines 33-40[In some embodiments, the article of clothing 830-1 may be selected on any basis, including but not limited to an entry of a keyword corresponding to the article of clothing 830-1 into a search box or other feature of an electronic marketplace, a selection of a hyperlinked element (e.g., text, images or other features) corresponding to the article of clothing 830-1, or a category of the article of clothing 830-1, or on any other basis.]), according to one or more of:
a name of the item, a category of the item, a brand name associated with the item, a location or geographical zone, a retailer of the item, a price of the item, a visual attribute of the item, or a physical attribute of the item (Fig. 1H; Col. 32, lines 33-40).
Regarding Claim 14, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 1, Haitani further discloses wherein the display of the one or more wearable items of interest is limited to only those items that include similar detailed dimensions as compared with the physical dimensions of the body part according to the similarity threshold, hence avoiding further requirement for the individual to review and filter results to ensure that the displayed wearable items represent a substantially correct and/or preferred fit (Figs. 1H, 8A, 16A[figures showing display of items that are of similar dimensions as the physical dimensions of user’s body]; Col. 32, lines 7-33[As is shown in FIG. 8A, a network page 816 associated with an electronic marketplace includes an image of an avatar 822 corresponding to a customer… the network page 816 specifies a recommended size of the article of clothing 830-1 for the customer, which may be identified based on the customer's known or determined sizes, shapes or dimensions, as well as the customer's purchasing history] in view of Col. 13, lines 10-20[For example, where a customer is interested in purchasing an article of clothing in a specific size, an avatar or other three-dimensional model may be used to determine whether the article of clothing will fit the customer, or to determine a quality of the fit]).
Regarding Claim 15, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 14, Haitani further discloses wherein the similarity threshold is selected according to the particular body part and/or item under comparison (Col. 32, lines 7-33[As is shown in FIG. 8A, a network page 816 associated with an electronic marketplace includes an image of an avatar 822 corresponding to a customer… the network page 816 specifies a recommended size of the article of clothing 830-1 for the customer, which may be identified based on the customer's known or determined sizes, shapes or dimensions, as well as the customer's purchasing history] in view of Col. 13, lines 10-20[For example, where a customer is interested in purchasing an article of clothing in a specific size, an avatar or other three-dimensional model may be used to determine whether the article of clothing will fit the customer, or to determine a quality of the fit]).
Regarding Claim 17, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 1, Haitani further discloses wherein the wearable item is an item of clothing, footwear or headgear (Col. 38, lines 21-29[For example, the item may be a bathing suit, a bathrobe, a belt, a bracelet, a pair of glasses, a hat, a jacket, a necklace, a necktie, a pair of pants, a parka, a scarf, a shirt, a pair of shoes, a pair of socks, a suit, a pair of suspenders, underwear, a wristwatch, or any other type or form of wearable item from a network page of the marketplace rendered by a browser operating on a computer device of the customer or, alternatively, on a page rendered by a dedicated shopping application operating on the computer device.]).
Regarding Claim 18, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 17, Haitani further discloses wherein selection of a particular wearable item in the display of the one or more wearable items of interest causes the software application to perform an action including any one or more of (Fig. 10; Col. 38, lines 10-29[In some embodiments, articles of clothing may be shown to a customer in predicted sizes on avatars corresponding to the customer, thereby enabling the customer to view how the articles fit on his or her body either alone or in combination with one or more other items prior to making a purchase. Sizes of the articles may be predicted for the customer based on a depth model or any other information or data regarding the customer. Referring to FIG. 14, a flow chart 1400 of one process in accordance with embodiments of the present disclosure is shown. At box 1410, a customer selects a wearable item of interest via a user interface of a marketplace. For example, the item may be a bathing suit, a bathrobe, a belt, a bracelet, a pair of glasses, a hat, a jacket, a necklace, a necktie, a pair of pants, a parka, a scarf, a shirt, a pair of shoes, a pair of socks, a suit, a pair of suspenders, underwear, a wristwatch, or any other type or form of wearable item from a network page of the marketplace rendered by a browser operating on a computer device of the customer or, alternatively, on a page rendered by a dedicated shopping application operating on the computer device]):
generating one or more user interfaces providing additional information relating to the selected wearable item,
generating one or more user interfaces displaying the three-dimensional model of the body part including graphical representations of the selected wearable item worn by the three-dimensional model of the body part (Fig. 10; Col. 38, lines 10-29);
generating one or more user interfaces enabling purchase of the selected wearable item; or
operating a web browser to display a page associated with a retailer of the wearable item to thereby enable purchase of the wearable item from the retailer's online store.
Regarding Claim 19, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 1, Haitani further discloses wherein the one or more processors further enable retailers to upload and manage details relating to the wearable items that the retailer offers for purchase, the details including the detailed dimensions of the wearable item in each available size (Fig. 1G[indicating uploading and managing of clothing data]; Col. 17, lines 32-39[The data stores 214 may include any type of information or data regarding items that have been made available for sale through the marketplace 210,... For example, as is shown in FIG. 2, the data store 214 includes… clothing data 230] in view of Col. 18, lines 26-34[The clothing data 230 includes any information or data regarding one or more articles of clothing that may be used to determine an appearance or behavior of an article of clothing while the article of clothing is worn by a customer, including but not limited to data regarding colors or textures of such articles, as well as data regarding specific components from which each of the articles is formed, including sizes, shapes or dimensions of panels, sheets, stitches, seams or fasteners from which such articles are formed]), and
further including one or more of an image of the wearable item, item stock, or links to the retailer website and/or online store (Fig. 1F[element 112 showing the marketplace server] and Fig. 1G; Col. 5, lines 50-57[As is shown in FIG. 1G, the marketplace server 112 may identify a plurality of items 130A, 130B (e.g., polo shirts) in response to the entry of the keyword at the network page 116-1, and may display a network page 116-2 including information or data regarding such items 130A, 130B (e.g., names, images, prices, details, customer ratings and/or any number of other interactive features)]).
Regarding Claim 22, Haitani discloses a computer-implemented method for facilitating the purchase of wearable items in an online environment, the method including (Fig. 1G-H; Col. 5, line 51 to Col. 6, line 22[As is shown in FIG. 1G, the marketplace server 112 may identify a plurality of items 130A, 130B (e.g., polo shirts) in response to the entry of the keyword at the network page 116-1, and may display a network page 116-2 including information or data regarding such items 130A, 130B (e.g., names, images, prices, details, customer ratings and/or any number of other interactive features). After the customer 170 selects the item 130A shown on the network page 116-2, a network page 116-3 including a plurality of details regarding the item 130A is displayed, as is shown in FIG. 1H. The network page 116-3 further depicts a visual representation 140 of the item 130A on an avatar 122 of the customer 170 that is derived based on the point cloud 120.]):
obtaining, using an image capture facility, multiple optical images of a body part of an individual (Figs. 1A[showing the image capture facility] and 3A; Col. 26, line 58 to Col. 27, line 11[At box 320, a customer positions himself or herself within a field of view of an imaging device. For example, referring again to FIG. 1A, an imaging device such as the depth sensing camera 180 may be mounted within the customer's home, or within a bricks-and-mortar retail establishment or other publicly accessible location, and aligned to enable a customer to freely stand in any pose or execute any gesture within its field of view. At box 325, the imaging device captures a plurality of depth images of the customer in a number of orientations.]) and
resolving, by an optical resolution technique, the images to generate a three-dimensional model of the body part, wherein the resolving of optical images to generate the three-dimensional model includes comparison of the body part images to further provide relative and/or absolute sizing information regarding the three-dimensional model of the body part (Fig. 4; Col. 28, line 60 to Col. 29, line 5[a single depth image, e.g., any one of the depth images 420A, 420B, 420C, 420D, 420E, 420F, 420G, 420H, or a single visual image may be used to generate an avatar or other three-dimensional model of a customer. For example, in some embodiments, where a single depth image or a single visual image is captured or otherwise obtained from the customer (e.g., uploaded by the customer), one or more dimensions or other physical properties of the customer may be determined based on the single depth image or the single visual image and used to select a corresponding virtual mannequin or a virtual body template of a customer from which an avatar or three-dimensional model may be generated.] in view of Col. 18, lines 12-25[The body templates 224 may be any collections of information or data regarding one or more bodies having standard sizes, shapes or dimensions, any of which may be selected as a basis for generating an avatar 222 or other three-dimensional model, based on a comparison or proximity to surface data 220 of a customer. Each of the body templates 224 may include unique data points corresponding to lengths, circumferences, diameters or thicknesses of heads, necks, shoulders, backs, arms, waists, hips, seats, legs or feet. One of the body templates 224 may be selected based on a comparison of the respective data points to position data (e.g., skeletal data) of a customer]);
wherein obtaining the multiple optical images of the body part includes: utilizing an optical hardware component associated with a mobile data communications device, including passing an optical lens associated with the optical hardware component over the body part (Col. 7, lines 18-44[Those of ordinary skill in the pertinent arts will recognize that imaging data, e.g., visual imaging data, depth imaging data, infrared imaging data, or imaging data of any other type or form, may be captured using one or more imaging devices such as digital cameras, depth sensors, range cameras, infrared cameras or radiographic cameras… Such data files may also be printed, displayed on one or more broadcast or closed-circuit television networks, or transmitted over a computer network as the Internet]; see ¶0013 of the instant specification where a smartphone camera being capable of capturing images of the body of an individual is stated as being an example of “passing an optical lens over the body part”); and
providing adequate capture of body part images, and audible and/or visual prompts to the individual when capturing the multiple optical images to enable generation of the three-dimensional model of the body part to determine the dimensions of features in the model (Fig. 1A and 4; Col. 25, lines 1-7[The computers, servers, devices and the like described herein have the necessary electronics, software, memory, storage, databases, firmware, logic/state machines, microprocessors, communication links, displays or other visual or audio user interfaces, printing devices, and any other input/output interfaces to provide any of the functions or services described herein and/or achieve the results described herein.]);
receiving, by one or more processors, input data from one or more retailers offering a range of wearable items for purchase, the input data including detailed dimensions of the wearable items (Figs. 1A, 1F and 1G; Col. 5, line 23 to Col. 6, line 47[the customer 170 may access one or more network pages 116-1 hosted by the marketplace server 112 or otherwise associated with an electronic marketplace. The marketplace server 112 may have a variety of data regarding clothing… After the customer 170 selects the item 130A shown on the network page 116-2, a network page 116-3 including a plurality of details regarding the item 130A is displayed, as is shown in FIG. 1H… The systems and methods disclosed herein enable customers to virtually “try on” articles of clothing prior to purchasing them from an electronic marketplace, and to select from one or more models or sizes of clothing without having to set foot in a bricks-and-mortar retailer] in view of Col. 17, lines 16-31; see also ¶0004 of the instant specification where size and dimension are used interchangeably);
storing, by one or more processors, the physical dimensions of the body part, as determined from the sizing information provided by the three- dimensional model (Fig. 2; Col. 17, lines 32-58[The data stores 214 may include any type of information or data regarding items that have been made available for sale through the marketplace 210, or ordered by customers, such as the customer 270, from the marketplace 210, or any information or data regarding customers. For example, as is shown in FIG. 2, the data store 214 includes surface data 220, one or more avatars 222, one or more body templates 224, clothing data 230, one or more customer profiles 232 and context data 234. The surface data 220 may include any information or data such as depth images (e.g., information relating to distances of surfaces of objects such as customers or clothing within a scene from a perspective of an imaging device), point clouds (e.g., a grouping of data points corresponding to external surfaces of customers derived from depth data), visual images (e.g., black-and-white, grayscale, or color images) or any other representation of data corresponding to surfaces of an object, with individual points in space having coordinates defining their respective locations in absolute terms or relative to an imaging system according to a standard coordinate system]; Examiner notes that data stores are comparable to databases);
searching for and identifying, by a searching facility, one or more wearable items of interest in the range of wearable items offered for purchase that include similar detailed dimensions as compared with the physical dimensions of the body part, as determined from the sizing information provided by the three-dimensional model according to a similarity threshold (Fig. 1A[the marketplace is comparable to a searching facility]; Col. 15, lines 51-67[a recommendation of an item, or a specific size or style of the item, may be identified for a customer based on his or her avatar or other three-dimensional model. For example, rather than generally identifying items for customers by traditional techniques such as collaborative filtering, e.g., by identifying items based on prior purchases by a customer or another customer, some embodiments of the present disclosure may identify items, or sizes or styles of items, based on items that were previously purchased or worn by customers having similar body sizes, shapes or dimensions that are similar to those of the customer.] in view of Col. 13, lines 10-20[For example, where a customer is interested in purchasing an article of clothing in a specific size, an avatar or other three-dimensional model may be used to determine whether the article of clothing will fit the customer, or to determine a quality of the fit]); and
generating and providing, by one or more processors, a display of the one or more wearable items of interest that include, based on the similarity threshold, similar detailed dimensions as compared with the physical dimensions of the body part according to the sizing information provided by the three-dimensional model of the body part (Fig. 1A[the marketplace is comparable to a searching facility]; Col. 15, lines 51-67[a recommendation of an item, or a specific size or style of the item, may be identified for a customer based on his or her avatar or other three-dimensional model. For example, rather than generally identifying items for customers by traditional techniques such as collaborative filtering, e.g., by identifying items based on prior purchases by a customer or another customer, some embodiments of the present disclosure may identify items, or sizes or styles of items, based on items that were previously purchased or worn by customers having similar body sizes, shapes or dimensions that are similar to those of the customer.] in view of Col. 13, lines 10-20[For example, where a customer is interested in purchasing an article of clothing in a specific size, an avatar or other three-dimensional model may be used to determine whether the article of clothing will fit the customer, or to determine a quality of the fit]).
Although Haitani discloses capturing images of a user’s body parts and generating a 3D model of their body, Haitani does not explicitly disclose capturing images of the body parts along with an object of known dimensions that is attached to, or located in proximity with, the body part, and thereby also in view of the image capture facility, or generating sizing information through a comparison of the body part images with the object of known dimensions.
However, Lawrence teaches capturing images of objects of interest along with an object of known dimensions nearby in order to generate a model through a comparison with the object of known dimensions (¶0026[For example, the electronic device can determine the ratio of lengths of the user's arm to the user's torso or legs. In some embodiments, the electronic device can determine an absolute or exact value for dimensions of the user's body by comparing the body lengths with known lengths in an image (e.g., relative to a known car length in the image, or other object in the image). The electronic device can use any suitable number of images to generate an avatar, including for example several images from different angles.]).
The method of Lawrence is applicable to the method of Haitani as they share characteristics and capabilities, namely, they are both directed to generating avatars of a user onto which articles of clothing may be applied. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the 3D model generation as disclosed by Haitani to include using an object of known dimensions as a reference for calculating dimensions as taught by Lawrence. One of ordinary skill in the art would have been motivated to expand the method of Haitani in order to define a personalized avatar providing a true representation of the user's body (Lawrence, ¶0006).
Although Haitani discloses capturing body part images and providing audible or visual responses to the individual in order to generate 3D models of body parts, Haitani in view of Lawrence does not explicitly teach providing guidance regarding adequate capture of body part images, or audible or visual prompts to guide the individual when capturing images, including indicating when sufficient images have been attained to enable generation of the model with sufficient data.
However, Blanchflower teaches providing guidance regarding sufficient and adequate data to generate 3D models (Fig. 1; ¶0012[The mobile computing device 110 includes an object capture guidance user interface 115 that helps to guide the user in capturing sufficient views of the target object 120 such that an accurate, well-defined three-dimensional model of the object may be created.]).
The method of Blanchflower is applicable to the method of Haitani in view of Lawrence as they share characteristics and capabilities, namely, they are all targeted to generating 3D models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the 3D model generation as taught by Haitani in view of Lawrence to include guiding the user to ensure capture of sufficient data as taught by Blanchflower. One of ordinary skill in the art would have been motivated to expand the method of Haitani in view of Lawrence in order to determine whether sufficient visual information exists in the plurality of two-dimensional images to generate a three-dimensional model of the object (Blanchflower, Abstract).
Regarding Claim 23, Haitani discloses a non-transitory computer-readable medium having a plurality of computer instructions stored thereon executable by one or more processors, that, when executed, cause the one or more processors to perform the steps of (Figs. 1G-H; Col. 26, lines 3-25):
obtaining, using an image capture facility, multiple optical images of a body part of an individual (Figs. 1A[showing the image capture facility] and 3A; Col. 26, line 58 to Col. 27, line 11[At box 320, a customer positions himself or herself within a field of view of an imaging device. For example, referring again to FIG. 1A, an imaging device such as the depth sensing camera 180 may be mounted within the customer's home, or within a bricks-and-mortar retail establishment or other publicly accessible location, and aligned to enable a customer to freely stand in any pose or execute any gesture within its field of view. At box 325, the imaging device captures a plurality of depth images of the customer in a number of orientations.]), and
resolving, by an optical resolution technique, the images to generate a three-dimensional model of the body part, wherein the resolving of optical images to generate the three-dimensional model includes comparison of the body part images to provide relative and/or absolute sizing information regarding the three-dimensional model of the body part (Fig. 4; Col. 28, line 60 to Col. 29, line 5[a single depth image, e.g., any one of the depth images 420A, 420B, 420C, 420D, 420E, 420F, 420G, 420H, or a single visual image may be used to generate an avatar or other three-dimensional model of a customer. For example, in some embodiments, where a single depth image or a single visual image is captured or otherwise obtained from the customer (e.g., uploaded by the customer), one or more dimensions or other physical properties of the customer may be determined based on the single depth image or the single visual image and used to select a corresponding virtual mannequin or a virtual body template of a customer from which an avatar or three-dimensional model may be generated.] in view of Col. 18, lines 12-25[The body templates 224 may be any collections of information or data regarding one or more bodies having standard sizes, shapes or dimensions, any of which may be selected as a basis for generating an avatar 222 or other three-dimensional model, based on a comparison or proximity to surface data 220 of a customer. Each of the body templates 224 may include unique data points corresponding to lengths, circumferences, diameters or thicknesses of heads, necks, shoulders, backs, arms, waists, hips, seats, legs or feet. One of the body templates 224 may be selected based on a comparison of the respective data points to position data (e.g., skeletal data) of a customer]);
wherein obtaining the multiple optical images of the body part includes: utilizing an optical hardware component associated with a mobile data communications device, including passing an optical lens associated with the optical hardware component over the body part (Col. 7, lines 18-44[Those of ordinary skill in the pertinent arts will recognize that imaging data, e.g., visual imaging data, depth imaging data, infrared imaging data, or imaging data of any other type or form, may be captured using one or more imaging devices such as digital cameras, depth sensors, range cameras, infrared cameras or radiographic cameras… Such data files may also be printed, displayed on one or more broadcast or closed-circuit television networks, or transmitted over a computer network as the Internet]; see ¶0013 of the instant specification where a smartphone camera being capable of capturing images of the body of an individual is stated as being an example of “passing an optical lens over the body part”); and
providing adequate capture of body part images, and audible and/or visual prompts to the individual when capturing the multiple optical images to enable generation of the three-dimensional model of the body part to determine the dimensions of features in the model (Fig. 1A and 4; Col. 25, lines 1-7[The computers, servers, devices and the like described herein have the necessary electronics, software, memory, storage, databases, firmware, logic/state machines, microprocessors, communication links, displays or other visual or audio user interfaces, printing devices, and any other input/output interfaces to provide any of the functions or services described herein and/or achieve the results described herein.]);
receiving input data from one or more retailers offering a range of wearable items for purchase, the input data including detailed dimensions of the items (Figs. 1A, 1F and 1G; Col. 5, line 23 to Col. 6, line 47[the customer 170 may access one or more network pages 116-1 hosted by the marketplace server 112 or otherwise associated with an electronic marketplace. The marketplace server 112 may have a variety of data regarding clothing… After the customer 170 selects the item 130A shown on the network page 116-2, a network page 116-3 including a plurality of details regarding the item 130A is displayed, as is shown in FIG. 1H… The systems and methods disclosed herein enable customers to virtually “try on” articles of clothing prior to purchasing them from an electronic marketplace, and to select from one or more models or sizes of clothing without having to set foot in a bricks-and-mortar retailer] in view of Col. 17, lines 16-31; see also ¶0004 of the instant specification where size and dimension are used interchangeably);
storing, in one or more databases, the physical dimensions of the body part, as determined from the sizing information provided by the three-dimensional model (Fig. 2; Col. 17, lines 32-58[The data stores 214 may include any type of information or data regarding items that have been made available for sale through the marketplace 210, or ordered by customers, such as the customer 270, from the marketplace 210, or any information or data regarding customers. For example, as is shown in FIG. 2, the data store 214 includes surface data 220, one or more avatars 222, one or more body templates 224, clothing data 230, one or more customer profiles 232 and context data 234. The surface data 220 may include any information or data such as depth images (e.g., information relating to distances of surfaces of objects such as customers or clothing within a scene from a perspective of an imaging device), point clouds (e.g., a grouping of data points corresponding to external surfaces of customers derived from depth data), visual images (e.g., black-and-white, grayscale, or color images) or any other representation of data corresponding to surfaces of an object, with individual points in space having coordinates defining their respective locations in absolute terms or relative to an imaging system according to a standard coordinate system]; Examiner notes that data stores are comparable to databases);
searching for and identifying, by a searching facility, one or more wearable items of interest in the range of wearable items offered for purchase that include similar detailed dimensions as compared with the physical dimensions of the body part, as determined from the sizing information provided by the three-dimensional model, according to a similarity threshold (Fig. 1A[the marketplace is comparable to a searching facility]; Col. 15, lines 51-67[a recommendation of an item, or a specific size or style of the item, may be identified for a customer based on his or her avatar or other three-dimensional model. For example, rather than generally identifying items for customers by traditional techniques such as collaborative filtering, e.g., by identifying items based on prior purchases by a customer or another customer, some embodiments of the present disclosure may identify items, or sizes or styles of items, based on items that were previously purchased or worn by customers having similar body sizes, shapes or dimensions that are similar to those of the customer.] in view of Col. 13, lines 10-20[For example, where a customer is interested in purchasing an article of clothing in a specific size, an avatar or other three-dimensional model may be used to determine whether the article of clothing will fit the customer, or to determine a quality of the fit]), and
generating and providing a display of the one or more wearable items of interest that include based on the similarity threshold, similar dimensions as compared with the dimensions of the body part according to the sizing information provided by the three-dimensional model of the body part (Fig. 1A[the marketplace is comparable to a searching facility]; Col. 15, lines 51-67[a recommendation of an item, or a specific size or style of the item, may be identified for a customer based on his or her avatar or other three-dimensional model. For example, rather than generally identifying items for customers by traditional techniques such as collaborative filtering, e.g., by identifying items based on prior purchases by a customer or another customer, some embodiments of the present disclosure may identify items, or sizes or styles of items, based on items that were previously purchased or worn by customers having similar body sizes, shapes or dimensions that are similar to those of the customer.] in view of Col. 13, lines 10-20[For example, where a customer is interested in purchasing an article of clothing in a specific size, an avatar or other three-dimensional model may be used to determine whether the article of clothing will fit the customer, or to determine a quality of the fit]).
Although Haitani discloses capturing images of a user’s body parts and generating a 3D model of their body, Haitani does not explicitly disclose capturing images of body parts along with an object of known dimensions that is attached to, or located in proximity with, the body part, and thereby also in view of the image capture facility, or a comparison of the body part images with the object of known dimensions.
However, Lawrence teaches capturing images of objects of interest along with an object of known dimensions nearby in order to generate a model through a comparison with the object of known dimensions (¶0026[For example, the electronic device can determine the ratio of lengths of the user's arm to the user's torso or legs. In some embodiments, the electronic device can determine an absolute or exact value for dimensions of the user's body by comparing the body lengths with known lengths in an image (e.g., relative to a known car length in the image, or other object in the image). The electronic device can use any suitable number of images to generate an avatar, including for example several images from different angles.]).
The system of Lawrence is applicable to the system of Haitani as they share characteristics and capabilities, namely, they are both targeted to generating avatars for a user to apply articles of clothing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the 3D model generation as disclosed by Haitani to include using an object of known dimensions as reference to calculate dimensions as taught by Lawrence. One of ordinary skill in the art would have been motivated to expand the system of Haitani in order to define a personalized avatar providing a true representation of the user's body (¶0006).
Although Haitani discloses providing capture of body part images and audible or visual responses to the individual in order to generate 3D models of body parts, Haitani in view of Lawrence does not explicitly teach providing guidance regarding adequate capture of body part images, or audible or visual prompts to guide the individual when capturing images, including when they have attained sufficient images to enable generation of the model with sufficient data.
However, Blanchflower teaches providing guidance regarding sufficient and adequate data to generate 3D models (Fig. 1; ¶0012[The mobile computing device 110 includes an object capture guidance user interface 115 that helps to guide the user in capturing sufficient views of the target object 120 such that an accurate, well-defined three-dimensional model of the object may be created.]).
The system of Blanchflower is applicable to the system of Haitani in view of Lawrence as they share characteristics and capabilities, namely, they are all targeted to generating 3D models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the 3D model generation as taught by Haitani in view of Lawrence to include guiding the user to ensure capture of sufficient data as taught by Blanchflower. One of ordinary skill in the art would have been motivated to expand the system of Haitani in view of Lawrence in order to determine whether sufficient visual information exists in the plurality of two-dimensional images to generate a three-dimensional model of the object (Abstract).
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haitani in view of Lawrence in view of Blanchflower in view of Smith et al. (US 2020/0193502 A1 [previously cited]).
Regarding Claim 8, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 1, Haitani further discloses wherein the resolution of the images to determine the three-dimensional model is assisted by data input, including data entered in relation to one or more physical dimensions of the body part from the resolution of images and generation of the model (Fig. 4; Col. 28, line 60 to Col. 29, line 5[a single depth image, e.g., any one of the depth images 420A, 420B, 420C, 420D, 420E, 420F, 420G, 420H, or a single visual image may be used to generate an avatar or other three-dimensional model of a customer. For example, in some embodiments, where a single depth image or a single visual image is captured or otherwise obtained from the customer (e.g., uploaded by the customer), one or more dimensions or other physical properties of the customer may be determined based on the single depth image or the single visual image and used to select a corresponding virtual mannequin or a virtual body template of a customer from which an avatar or three-dimensional model may be generated.]).
Although Haitani discloses inputting data including data entered by an individual in relation to physical dimensions from images and a model, Haitani in view of Lawrence in view of Blanchflower does not explicitly teach data input by the individual, including data manually entered by the individual in relation to physical dimensions of the body part unable to be determined.
However, Smith et al., hereinafter, Smith, teaches manual data input by a user for missing data (¶0107[Data Input—Concrete metadata is used to categorize the garment, by attributing the related variables into the system in one of three ways. First, the user will photograph the garment using the mobile application. The mobile application will identify, then look up the specific item from a master clothing database using visual identification technology (such as visual computing). Second, for items that are missing from the master database (such as vintage garments), the user can photograph the garment and manually input the related variables (e.g., brand, size, item type, etc.). Third, the user can forward a digital copy of the receipt from the vendor, directly to their user account (connected to the user's dataset)]).
The system of Smith is applicable to the system of Haitani in view of Lawrence in view of Blanchflower as they share characteristics and capabilities, namely, they are all targeted to online modeling of objects. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the 3D model generation as taught by Haitani in view of Lawrence in view of Blanchflower to include manual input of data as taught by Smith. One of ordinary skill in the art would have been motivated to expand the system of Haitani in view of Lawrence in view of Blanchflower in order to make suggestions on what to wear or purchase to follow current styles or trends (¶0011).
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haitani in view of Lawrence in view of Blanchflower in view of Yu et al. (US 2017/0213395 A1 [previously cited]).
Regarding Claim 11, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 1, Haitani further discloses wherein the one or more processors are further operable to receive optical image(s) once the physical dimensions of the body part have been determined from the three-dimensional model (Col. 5, lines 10-15[The point cloud 120 may be stored on the marketplace server 112 or in one or more other data stores and associated with the customer 170, e.g., in a profile of data corresponding to the customer 170.]).
Although Haitani discloses receiving images to determine physical dimensions of a user’s body, Haitani in view of Lawrence in view of Blanchflower does not explicitly teach operable to delete any received images once the dimensions are determined.
However, Yu et al., hereinafter, Yu, teaches deleting data once the data has been used (¶0060[ After the data retrieved from the data buffers has been validated as ready, it is considered to be active, and scene engine 103 deletes the original data from its pipeline. In one variant, scene engine 103 deletes the original data from its pipeline only after it is confirmed that the original data from its pipeline is not being used]).
The system of Yu is applicable to the system of Haitani in view of Lawrence in view of Blanchflower as they share characteristics and capabilities, namely, they are all targeted to online modeling of objects. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the 3D model generation as taught by Haitani in view of Lawrence in view of Blanchflower to include deleting data as taught by Yu. One of ordinary skill in the art would have been motivated to expand the system of Haitani in view of Lawrence in view of Blanchflower in order to delete a message from main memory and/or secondary storage as a message may become outdated and/or there is not enough room in main memory and/or secondary storage to store new messages (¶0068).
Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haitani in view of Lawrence in view of Blanchflower in view of Vierra et al. (US 2015/0186975 A1 [previously cited]).
Regarding Claim 16, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 1, Haitani further discloses wherein, in the event there are results in the display of the one or more wearable items of interest on the basis that there are items located having similar detailed dimensions as compared with the physical dimensions of the body part according to the similarity threshold, the one or more processors are further operable to generate a prompt to the individual of the nearest dimensions (Fig. 1H, 8A [figures showing display of items that are of similar dimensions as the physical dimensions of user’s body]; Col. 32, lines 7-33[As is shown in FIG. 8A, a network page 816 associated with an electronic marketplace includes an image of an avatar 822 corresponding to a customer… the network page 816 specifies a recommended size of the article of clothing 830-1 for the customer, which may be identified based on the customer's known or determined sizes, shapes or dimensions, as well as the customer's purchasing history] in view of Col. 13, lines 10-20[For example, where a customer is interested in purchasing an article of clothing in a specific size, an avatar or other three-dimensional model may be used to determine whether the article of clothing will fit the customer, or to determine a quality of the fit]).
Although Haitani discloses displaying results of items having similar dimensions as compared with the physical dimensions of the body part, Haitani in view of Lawrence in view of Blanchflower does not explicitly teach there being no results on the basis that there are no items located having similar dimensions, and prompting the individual to broaden their search such that the listing includes items of nearest dimensions.
However, Vierra et al., hereinafter, Vierra, teaches broadening search criteria to include items of nearest similarity (Fig. 6[showing no results being located]; ¶0013[The matching criteria may also include one or more filters that the user may use to broaden or narrow the search (e.g., on a field by field basis). For example, the matching criteria may include a filter on a product name field that specifies that the search should only return any product listings that have a product name that exactly matches the name of the particular product (or one or more specified abbreviations or variations of the name). Alternatively, the matching criteria may include a broader filter for the product name field specifying that the search return product listings that include a product name that is similar to the name of the particular product (or one or more specified abbreviations or variations of the name)] in view of ¶0090).
The system of Vierra is applicable to the system of Haitani in view of Lawrence in view of Blanchflower as they share characteristics and capabilities, namely, they are all targeted to online modeling of objects. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the item searching method as taught by Haitani in view of Lawrence in view of Blanchflower to include no results being found and broadening the search as taught by Vierra. One of ordinary skill in the art would have been motivated to expand the system of Haitani in view of Lawrence in view of Blanchflower in order to retrieve one or more matching product listings for a particular product (¶0003).
Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haitani in view of Lawrence in view of Blanchflower in view of Leggett et al. (US 2006/0031123 A1 [previously cited]).
Regarding Claim 20, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 19, Haitani further discloses wherein the stored input data relating to each wearable item is stored with one or more computer systems associated with the retailer to ensure that the stored input data is maintained (Fig. 1G[indicating uploading and managing of clothing data]; Col. 17, lines 32-39[The data stores 214 may include any type of information or data regarding items that have been made available for sale through the marketplace 210,... For example, as is shown in FIG. 2, the data store 214 includes… clothing data 230] in view of Col. 18, lines 26-34[The clothing data 230 includes any information or data regarding one or more articles of clothing that may be used to determine an appearance or behavior of an article of clothing while the article of clothing is worn by a customer, including but not limited to data regarding colors or textures of such articles, as well as data regarding specific components from which each of the articles is formed, including sizes, shapes or dimensions of panels, sheets, stitches, seams or fasteners from which such articles are formed]).
Although Haitani discloses data relating to wearable items, Haitani in view of Lawrence in view of Blanchflower does not explicitly teach data that regularly synchronizes to maintain up to date status.
However, Leggett et al., hereinafter, Leggett, teaches a system that allows merchants to regularly update product information (¶0021[Generally, merchants should update their product information on a regular basis to ensure that product database 230 contains up-to-date, reliable information. In alternate embodiments, merchants may submit updates daily to the third-party host, or the third-party host may provide an automated process that allows a merchant to update product information through a website or client program. These methods are well known for those in the art and need not be described further here.]).
The system of Leggett is applicable to the system of Haitani in view of Lawrence in view of Blanchflower as they share characteristics and capabilities, namely, they are all targeted to online modeling of objects. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the stored data as taught by Haitani in view of Lawrence in view of Blanchflower to include regularly synchronizing data as taught by Leggett. One of ordinary skill in the art would have been motivated to expand the system of Haitani in view of Lawrence in view of Blanchflower in order to reflect changing market conditions (¶0002).
Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Haitani in view of Lawrence in view of Blanchflower in view of Adelman et al. (US 2009/0210320 A1 [previously cited]).
Regarding Claim 21, Haitani in view of Lawrence in view of Blanchflower teaches the system according to claim 19, Haitani further discloses wherein the one or more processors are further operable to integrate with the data of a retailer's online store, such that when the individual accesses the online store and views and/or searches wearable items of interest, those wearable items of interest that include similar detailed dimensions as compared with the physical dimensions of the body part of the individual will be displayed and/or listed in search results generated from the online store (Fig. 1H, 8A [figures showing display of items that are of similar dimensions as the physical dimensions of user’s body]; Col. 32, lines 7-33[As is shown in FIG. 8A, a network page 816 associated with an electronic marketplace includes an image of an avatar 822 corresponding to a customer… the network page 816 specifies a recommended size of the article of clothing 830-1 for the customer, which may be identified based on the customer's known or determined sizes, shapes or dimensions, as well as the customer's purchasing history] in view of Col. 13, lines 10-20[For example, where a customer is interested in purchasing an article of clothing in a specific size, an avatar or other three-dimensional model may be used to determine whether the article of clothing will fit the customer, or to determine a quality of the fit]).
Although Haitani discloses integrating data of a retailer’s online store such that users may access and view wearable items of interest, Haitani in view of Lawrence in view of Blanchflower does not explicitly teach an online store using an application programming interface (API) and displaying only wearable items that include similar dimensions as the physical dimensions of the body part of the user.
However, Adelman et al., hereinafter, Adelman, teaches online stores using an API and displaying only items that include similar dimensions as a user (¶0086[Another application for such feature is when the target match size for the target item specified by the user is not in stock or possibly not available (i.e., the item may not be manufactured to accommodate the user's size). For example, the manufacturer may only design an item for sizes 6 through 14, wherein the target match size may be less than a size 6 or greater than a size 14. This feature may be used to direct the user to other similar items that are currently in stock, available in the user's size and offered by the same retailer as the target item identified by the user. The comparative sizing scheme based on the user's current virtual closet is applied to any suggested target item so that the user's target match size is automatically displayed along with the suggested target item itself]; Examiner notes that directing the customer to sizes similar to the size of the customer is comparable to displaying only wearable items that include similar dimensions as compared with the customer’s dimensions).
The system of Adelman is applicable to the system of Haitani in view of Lawrence in view of Blanchflower as they share characteristics and capabilities, namely, they are all targeted to online modeling of objects. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the online store as taught by Haitani in view of Lawrence in view of Blanchflower to include an API and displaying items with similar dimensions as taught by Adelman. One of ordinary skill in the art would have been motivated to expand the system of Haitani in view of Lawrence in view of Blanchflower in order to develop an improved sizing system and method that is user friendly (¶0010).
Response to Arguments
Applicant’s arguments on pages 10-12 of the remarks filed 01/06/2026, with respect to the previous 35 USC § 101 rejections have been fully considered but are not persuasive.
Applicant argues on pages 10 and 11 that the amended claims are directed to technical limitations and that they should be eligible under Step 2A Prong 1 for the reason that the claims do not fall within an abstract idea grouping. Examiner respectfully disagrees. According to the MPEP 2106.04, the question of whether a claim is “directed to” a judicial exception in Step 2A is now evaluated using a two-prong inquiry. Prong One asks if the claim “recites” an abstract idea, law of nature, or natural phenomenon. Under that prong, the mere inclusion of a judicial exception such as a method of organizing human activity in a claim means that the claim “recites” a judicial exception (see MPEP 2106.04 [“The mere inclusion of a judicial exception such as a mathematical formula (which is one of the mathematical concepts identified as an abstract idea in MPEP § 2106.04(a)) in a claim means that the claim "recites" a judicial exception under Step 2A Prong One.”]). Additionally, MPEP 2106.04 instructs examiners to refer to the groupings of abstract ideas enumerated in MPEP 2106.04(a)(2) (i.e., mathematical concepts, certain methods of organizing human activities, and mental processes) in order to identify abstract ideas. As noted above and in the previous office action, the claims recite item recommendation. This is an abstract idea because it is a concept of business relations which makes it a method of organizing human activity (i.e., one of the groupings of abstract ideas enumerated in MPEP 2106.04(a)(2)).
The limitations argued by the applicant on pages 10 and 11 such as to obtain multiple optical images of a body part of an individual along with an object of known dimensions that is attached to, or located in proximity with, the body part, and resolving, by an optical resolution technique, the images to generate a three-dimensional model of the body part as recited in claim 1 are encompassed by the abstract idea and do not provide a solution to a technical problem. The additional elements of “an image capture facility operable to obtain” will be analyzed under Step 2A, Prong Two and not within Step 2A, Prong One.
Applicant argues on pages 11-12 of the remarks that the amended claims integrate the abstract idea into a practical application. Examiner respectfully disagrees. Facilitating the purchase of wearable items in an environment, including: obtain multiple optical images of a body part of an individual along with an object of known dimensions that is attached to, or located in proximity with, the body part, and resolving, by an optical resolution technique, the images to generate a three-dimensional model of the body part, wherein the resolving of optical images to generate the three-dimensional model includes comparison of the body part images with the object of known dimensions to further provide relative and/or absolute sizing information regarding the three-dimensional model of the body part, wherein obtaining the multiple optical images of the body part includes: utilizing, including passing over the body part; and providing guidance regarding adequate capture of body part images, and audible and/or visual prompts to guide the individual when capturing the multiple optical images including when they have attained sufficient images to enable generation of the three-dimensional model of the body part with sufficient data to determine the dimensions of features in the model; receive input data from one or more retailers offering a range of wearable items for purchase, the input data including detailed dimensions of the wearable items; store the physical dimensions of the body part, as determined from the sizing information provided by the three-dimensional model; identify one or more wearable items of interest in the range of items offered for purchase that include detailed dimensions similar to the physical dimensions of the body part as determined from the sizing information provided by the three-dimensional model, according to a similarity threshold; generate and provide a display of the one or more wearable items of interest that include based on the similarity 
threshold, similar detailed dimensions as compared with the physical dimensions of the body part according to the sizing information provided by the three-dimensional model of the body part as recited in amended claim 1 are all part of the abstract idea. The mere execution of the abstract idea on generic components which are recited at a high level does not overcome the rejection. The components of an “image capture facility”, an “optical hardware component associated with a mobile data communication device”, an “optical lens associated with the optical hardware component”, “processors”, “databases”, and a “searching facility” are described as generic and at a high level in ¶0013, ¶0044, ¶0054, and ¶¶0073-0078 of the instant specification.
Furthermore, claiming the improved speed or efficiency inherent with applying the abstract idea on a computer does not integrate the judicial exception into a practical application or provide an inventive concept, refer to the MPEP 2106.05(f)(2).
Accordingly, Examiner maintains that the invention is directed to a judicial exception without significantly more. The claims recite an abstract idea. This judicial exception is not integrated into a practical application. The claim(s) do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Thus, the 35 USC § 101 rejections are maintained.
Applicant’s arguments on pages 12-15 of the remarks filed 01/06/2026, with respect to the previous 35 USC § 103 rejections have been fully considered but are not persuasive.
Applicant argues on pages 12-13 of the remarks that Haitani does not disclose “selection of products” and “generation and display of clothing articles from a larger set.” Examiner respectfully disagrees. Haitani describes a marketplace that has a “variety of data regarding clothing” and allows a customer to select a clothing item and displays information regarding the item as a result, see Haitani Col. 5, line 23 to Col. 6, line 47. The aforementioned process is comparable to selecting a product from a subset of clothing products. Therefore, Haitani discloses “selection of products” and “generation and display of clothing articles from a larger set.”
Applicant argues on pages 13-14 that Haitani in view of Lawrence does not teach "generate and provide a display of the one or more wearable items of interest that include based on the similarity threshold, similar detailed dimensions as compared with the physical dimensions of the body part according to the sizing information provided by the three-dimensional model of the body part.” Examiner respectfully disagrees. Haitani describes the recommendation of a clothing item “or a specific size…” of the clothing item for a customer based on a three-dimensional model of the user. Furthermore, Haitani discloses identifying clothing items for a user based on items that were purchased by other customers with similar sizes as the customer, see Haitani Col. 15, lines 51-67 and Figs. 1H, 8A, 8B, and 10 which show displaying of clothing on a user’s avatar. Furthermore, Col. 13, lines 10-20 of Haitani detail a specific example of an avatar being used to determine the fit of a clothing item on a user. This is comparable to generating and providing a display of wearable items of interest based on a similarity threshold with similar dimensions as compared with the physical dimensions of the body part according to sizing information provided by a three-dimensional model of a body part.
Applicant further argues on page 14 of the remarks that Haitani fails to disclose an “optical hardware component associated with a mobile data communications device” to generate images. Examiner respectfully disagrees. Haitani describes a variety of imaging devices that may be used to generate images, such as digital cameras, depth sensors, range cameras, infrared cameras, or radiographic cameras; see Haitani Col. 7, lines 18-44. Per MPEP 2111, the pending claims must be given their broadest reasonable interpretation consistent with the specification, and Applicant’s arguments regarding the type of imaging devices used rest on an unduly narrow interpretation of the claims. Furthermore, according to MPEP 2111.01(II), it is improper to import claim limitations from the specification when interpreting the claims under the broadest reasonable interpretation.
Applicant argues on pages 14-15 of the remarks that Haitani fails to disclose that “an individual is provided with prompts (audible and/or visual)” when capturing images. Examiner respectfully disagrees. Figs. 1A and 4 of Haitani show a user interacting with a user interface on a device while capturing body images. The specification of Haitani further describes that “any of the functions or services described herein” may utilize “visual or audio user interfaces” for a user interacting with a device; see Haitani Col. 25, lines 1-7. This is comparable to an individual being provided with prompts when capturing images, as recited in amended claim 1. As noted above, the pending claims must be given their broadest reasonable interpretation consistent with the specification, and it is improper to import claim limitations from the specification under that standard.
Applicant argues on page 15 of the remarks that there is a lack of motivation to combine reference Blanchflower with Haitani in view of Lawrence. Examiner respectfully disagrees. Haitani describes defining three-dimensional avatars for users based on imaging data in order to depict how clothing will appear or behave while worn by a customer. Lawrence describes a personalized avatar that provides a three-dimensional representation of a user’s body with different clothing applied. Blanchflower describes a method for three-dimensional object modeling to view an object in three dimensions. These inventions are all directed to the generation and depiction of three-dimensional models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the 3D model generation taught by Haitani in view of Lawrence to include guiding the user to ensure capture of sufficient data, as taught by Blanchflower. One of ordinary skill in the art would have been motivated to expand the system of Haitani in view of Lawrence in order to determine whether sufficient visual information exists in the plurality of two-dimensional images to generate a three-dimensional model of the object (Blanchflower, Abstract).
Accordingly, the rejections based on Haitani, Lawrence, and Blanchflower have been maintained.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHOORA LADONI whose email is Ahoora.Ladoni@uspto.gov and telephone number is (703) 756-5617. The examiner can normally be reached M-F 0900–1700 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AHOORA LADONI/Examiner, Art Unit 3689
/MARISSA THEIN/Supervisory Patent Examiner, Art Unit 3689