Prosecution Insights
Last updated: April 19, 2026
Application No. 18/890,136

SYSTEM, PLATFORM, DEVICE AND METHOD FOR PERSONALIZED SHOPPING

Non-Final OA: §101, §103, Double Patenting

Filed: Sep 19, 2024
Examiner: PRESTON, ASHLEY DAWN
Art Unit: 3688
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Nike, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 42% (Moderate)
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 68%

Examiner Intelligence

Career Allow Rate: 42% (71 granted / 169 resolved; -10.0% vs TC avg)
Interview Lift: +25.6% (allow rate in resolved cases with an interview vs. without)
Avg Prosecution: 3y 5m (typical timeline)
Currently Pending: 42 applications
Total Applications: 211 across all art units (career history)
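The card's headline percentages can be re-derived from the raw counts it reports. The short sketch below checks that arithmetic; the variable names are illustrative, not part of any analytics tool:

```python
# Re-deriving the dashboard figures from the raw counts on the card
# (71 granted out of 169 resolved cases).
granted, resolved = 71, 169

allow_rate = granted / resolved          # career allow rate
assert f"{allow_rate:.0%}" == "42%"      # matches the "42%" shown on the card

# The stated "-10.0% vs TC avg" implies a Tech Center average around 52%:
tc_average = allow_rate + 0.100          # ≈ 0.52

# Adding the "+25.6% interview lift" to the 42% baseline reproduces the
# "68% with interview" estimate shown for this application:
with_interview = allow_rate + 0.256      # ≈ 0.676, displayed as 68%
assert f"{with_interview:.0%}" == "68%"
```

This confirms the card's numbers are internally consistent: the 68% with-interview figure is simply the career allow rate plus the observed interview lift.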

Statute-Specific Performance

§101: 43.7% (+3.7% vs TC avg)
§103: 37.0% (-3.0% vs TC avg)
§102: 5.5% (-34.5% vs TC avg)
§112: 9.1% (-30.9% vs TC avg)

Tech Center averages are estimates; based on the examiner's career data from 169 resolved cases.

Office Action

§101 · §103 · Double Patenting
DETAILED ACTION

Status of Claims

This action is in reply to the claims filed on 19 September 2024. Claims 1-20 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The Information Disclosure Statements filed on 21 February 2025, 21 February 2025, 22 August 2025, 06 November 2025, and 02 February 2026 have been considered. An initialed copy of the Forms 1449 is enclosed herewith.

Claim Objections

Claim 11 is objected to because of the following informalities: the claim recites that the mobile computing device is communicatively connected to the cloud-based server to "communication" image capture data to the cloud-based server, which appears to be a typographical error and should be recited as "communicate." Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Under Step 1, it is determined whether the claims are directed to a statutory category of invention (see MPEP 2106.03(II)). In the instant case, claims 1-10 are directed to a method, claims 11-17 are directed to a system, and claims 18-20 are directed to a device. While the claims fall within statutory categories, under revised Step 2A, Prong 1 of the eligibility analysis (MPEP 2106.04), the claimed invention recites an abstract idea of personalized shopping.
Specifically, representative claim 1 recites the abstract idea of: displaying markings to guide a user to allow an accurate scan of a part of a user’s body with respect to a reference object; obtaining an image of the part of the user’s body; processing the image to automatically identify one or more features of the part of the user’s body; generating a personalized user shopping avatar based on generating a model of the part of the user’s body, wherein the personalized user shopping avatar comprises user body measurement data calculated from the processed image; generating at least one product recommendation based on matching one or more products from one or more product data sources to the personalized user shopping avatar; interactively presenting, using the personalized shopping assistant, one or more matched products worn on the personalized user shopping avatar; receiving feedback relating to the one or more matched products worn on the personalized user shopping avatar; and upon receiving the feedback, automatically generating an updated interactive presentation of the personalized user shopping avatar and one or more modified product recommendations in accordance with the feedback.

Under revised Step 2A, Prong 1 of the eligibility analysis, it is necessary to evaluate whether the claim recites a judicial exception by referring to the subject matter groupings articulated in MPEP 2106.04(a). Even in consideration of that analysis, the claims recite an abstract idea. Representative claim 1 recites the abstract idea of personalized shopping, as noted above. This concept is considered to be a method of organizing human activity.
Certain methods of organizing human activity include “fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).” MPEP 2106.04(a)(2)(II).

In this case, the abstract idea recited in representative claim 1 is a certain method of organizing human activity because it relates to sales activities: the claims specifically recite steps for personalized shopping that comprise displaying markings to guide a user to allow an accurate scan of a part of a user’s body with respect to a reference object; obtaining an image of the part of the user’s body; automatically identifying one or more features of the part of the user’s body; generating a personalized user shopping avatar based on generating a model of the part of the user’s body, wherein the personalized user shopping avatar comprises user body measurement data calculated from the processed image; generating at least one product recommendation based on matching one or more products from one or more product data sources to the personalized user shopping avatar; interactively presenting, using the personalized shopping assistant, one or more matched products worn on the personalized user shopping avatar; receiving feedback relating to the one or more matched products worn on the personalized user shopping avatar; and upon receiving the feedback, automatically generating an updated interactive presentation of the personalized user shopping avatar and one or more modified product recommendations in accordance with the feedback, thereby making this a sales activity or behavior. Thus, representative claim 1 recites an abstract idea.
Under Step 2A, Prong 2 of the eligibility analysis, if it is determined that the claims recite a judicial exception, it is then necessary to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of that exception. MPEP 2106.04(d). The courts have identified limitations that do not integrate a judicial exception into a practical application, including limitations merely reciting the words “apply it” (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). MPEP 2106.04(d).

In this case, representative claim 1 includes the following additional elements: a display of a mobile computing device; a 3D scanner on an imaging module attached to or integrated with a back portion of the mobile computing device, wherein the 3D scanner is configured to scan the part of the user’s body; the personalized shopping assistant application; and a remote computing device. Although reciting such additional elements, the claim does not integrate the abstract idea into a practical application because the elements merely amount to no more than an instruction to apply the abstract idea using a generic computer, or merely use a computer as a tool to perform the abstract idea. These additional elements are described at a high level in Applicant’s specification without any meaningful detail about their structure or configuration. Similar to the limitations in Alice, representative claim 1 merely recites a commonplace business method (i.e., personalized shopping) being applied on a general-purpose computer using general purpose computer technology. MPEP 2106.05(f). Thus, the claimed additional elements are merely generic elements, and their implementation amounts to no more than an instruction to apply the abstract idea using a generic computer.
Since the additional elements merely include instructions to implement the abstract idea on a generic computer, or merely use a generic computer as a tool to perform an abstract idea, the abstract idea has not been integrated into a practical application.

Under Step 2B of the eligibility analysis, if it is determined that the claims recite a judicial exception that is not integrated into a practical application of that exception, it is then necessary to evaluate the additional elements individually and in combination to determine whether they provide an inventive concept (i.e., whether the additional elements amount to significantly more than the exception itself). MPEP 2106.05. In this case, as noted above, the additional elements recited in independent claim 1 are recited and described in a generic manner and merely amount to no more than an instruction to apply the abstract idea using a generic computer, or merely use a generic computer as a tool to perform an abstract idea. Even when considered as an ordered combination, the additional elements of representative claim 1 do not add anything that is not already present when they are considered individually. In Alice, the court considered the additional elements “as an ordered combination,” and determined that “the computer components…‘ad[d] nothing…that is not already present when the steps are considered separately’… [and] [v]iewed as a whole…[the] claims simply recite intermediated settlement as performed by a generic computer.” Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 217 (2014) (citing Mayo, 566 U.S. at 79, 101 USPQ2d at 1972). Similarly, when viewed as a whole, representative claim 1 simply conveys the abstract idea itself facilitated by generic computing components.
Therefore, under Step 2B of the Alice/Mayo test, there are no meaningful limitations in representative claim 1 that transform the judicial exception into a patent-eligible application such that the claim amounts to significantly more than the judicial exception itself. As such, representative claim 1 is ineligible.

Independent claims 11 and 18 are similar in nature to representative claim 1, and the Step 2A, Prong 1 analysis is the same as set forth above for representative claim 1. It is noted that independent claim 11 includes the additional elements of a cloud-based server including a profile module, a product module, and a product matching module; an application running on a mobile computing device, wherein the mobile computing device includes an imaging module attached to or integrated with a back portion of the mobile computing device; and the mobile computing device being communicatively connected to the cloud-based server to communication image capture data to the cloud-based server. Independent claim 18 includes the additional elements of a mobile device, a touch screen, and a processor having registers adapted. The Applicant’s specification does not provide any discussion or description of the claimed additional elements of claims 11 and 18 as being anything other than generic elements. Thus, the claimed additional elements of claims 11 and 18 are merely generic elements, and their implementation amounts to no more than an instruction to apply the abstract idea using a generic computer. As such, the additional elements of claims 11 and 18 do not integrate the judicial exception into a practical application of the abstract idea. Additionally, the additional elements of claims 11 and 18, considered individually and in combination, do not provide an inventive concept because they merely amount to no more than an instruction to apply the abstract idea using a generic computer. As such, claims 11 and 18 are ineligible.
Dependent claims 2-10, 12-17, and 19-20, depending from claims 1, 11, and 18 respectively, do not aid in the eligibility of independent claims 1, 11, and 18. Claims 2-10, 12-17, and 19-20 merely act to provide further limitations of the abstract idea and are ineligible subject matter. It is noted that the dependent claims include the additional elements of a machine learning engine (claims 3 and 14), a depth sensor (claims 5 and 20), a social network (claim 7), a virtual simulation (claim 9), and a personalized product ordering module (claim 12). Applicant’s specification does not provide any discussion or description of these claimed additional elements as being anything other than generic elements. The claimed additional elements, individually and in combination, do not integrate the abstract idea into a practical application and do not provide an inventive concept because they are merely being used to apply the abstract idea using a generic computer (see MPEP 2106.05(f)). Accordingly, claims 3, 5, 7, 9, 12, 14, and 20 are directed towards an abstract idea. Additionally, the additional elements of claims 3, 5, 7, 9, 12, 14, and 20, considered individually and in combination, do not provide an inventive concept because they merely amount to no more than an instruction to apply the abstract idea using a generic computer. It is further noted that the remaining dependent claims 2, 4, 6, 8, 10, 13, 15-17, and 19 do not recite any further additional elements to consider in the analysis, and therefore do not provide additional elements that would integrate the abstract idea into a practical application or provide an inventive concept. As such, dependent claims 2-10, 12-17, and 19-20 are ineligible.
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms.
The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over the claims of US Patent No. 12,131,371, hereinafter referred to as ‘371.

Instant Claim 1:
A method comprising: displaying markings on a display of a mobile computing device to guide a user to allow an accurate scan of a part of a user’s body with respect to a reference object; obtaining an image of the part of the user’s body using a 3D scanner on an imaging module attached to or integrated with a back portion of the mobile computing device, wherein the 3D scanner is configured to scan the part of the user’s body; processing the image to automatically identify one or more features of the part of the user’s body; generating a personalized user shopping avatar based on generating a model of the part of the user’s body, wherein the personalized user shopping avatar comprises user body measurement data calculated from the processed image; generating at least one product recommendation based on matching one or more products from one or more product data sources to the personalized user shopping avatar; interactively presenting, on the display of the mobile computing device and using the personalized shopping assistant application, one or more matched products worn on the personalized user shopping avatar; receiving, from a remote computing device, feedback relating to the one or more matched products worn on the personalized user shopping avatar; and upon receiving the feedback, automatically generating an updated interactive presentation of the personalized user shopping avatar and one or more modified product recommendations in accordance with the feedback.

Claim 1 of Patent ‘371:
A method comprising: acquiring anatomical data of a user using a personalized shopping assistant application on a mobile computing device, wherein the mobile computing device is configured to capture anatomical data by: displaying markings on a display of the mobile computing device to guide the user to stand appropriately to allow an accurate scan of a part of a user's body with respect to a reference object; automatically obtaining an image of the part of the user's body using a 3D scanner on an imaging module attached to or integrated with a back portion of the mobile computing device, wherein the 3D scanner is configured to scan the part of the user's body, and wherein the image includes multidimensional geometrical data of the scanned part of the user's body; and processing the image to automatically identify one or more features of the part of the user's body using machine vision; and calculating, based on the processed image and with respect to the reference object and the mobile computing device, user body measurement data of the part of the user's body; generating, based on the user body measurement data, a personalized user shopping avatar based on generating a model of the part of the user's body, wherein the personalized user shopping avatar comprises the user body measurement data and user behavior data; accessing, using the personalized shopping assistant application, product data from one or more product data sources; generating at least one product recommendation based on matching one or more products from the one or more product data sources to the personalized user shopping avatar; interactively presenting, on the display of the mobile computing device and using the personalized shopping assistant application, one or more matched products worn on the personalized user shopping avatar; transmitting a prompt to a remote computing device associated with a sales associate at a point of sale to provide feedback in connection with the one or more matched products worn on the personalized user shopping avatar; receiving, from the remote computing device associated with the sales associate, feedback relating to the one or more matched products worn on the personalized user shopping avatar; and upon receiving the feedback, automatically generating an updated interactive presentation of the personalized user shopping avatar and one or more modified product recommendations in accordance with the feedback.

Instant Claim 2:
wherein processing the image comprises determining multidimensional geometrical data of the scanned part of the user’s body.

Claim 1 of Patent ‘371:
wherein the image includes multidimensional geometrical data of the scanned part of the user's body;

Claim 3 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of US Patent No. 12,131,371, hereinafter referred to as ‘371, in view of Fonte, T., et al. (PGP No. US 2015/0055086 A1).

Instant Claim 3:
wherein processing the image includes using a machine learning engine to automatically recognize one or more body features.

Claim 1 of Patent ‘371:
processing the image to automatically identify one or more features of the part of the user's body using machine vision;

Patent ‘371 does not disclose that processing the image includes using a machine learning engine, but Fonte does teach the feature (Fonte, see: paragraph [0031] teaching “automatically from image data, may be used to provide further information to customize products” and “learning algorithms that enable a product design to be altered to suit a particular user”; paragraph [0052] teaching “a learning machine or predictor or prognostication machine” and “the user provided image data…to determine user preferences for custom eyewear properties”; and paragraph [0053] teaching “In accordance with another embodiment, a system and method are disclosed for learning from a body of data”). This step of Fonte is applicable to the method of ‘371, as they both share characteristics and capabilities; namely, they are directed to displaying products that are wearable on a user’s digital model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of ‘371 to include the feature wherein processing the image includes using a machine learning engine to automatically recognize one or more body features, as taught by Fonte.

Instant Claim 4:
wherein the feedback is received in response to transmitting a prompt to the remote computing device at a point of sale, and wherein the prompt includes an indication to provide feedback in connection with the one or more matched products worn on the personalized user shopping avatar.

Claim 1 of Patent ‘371:
transmitting a prompt to a remote computing device associated with a sales associate at a point of sale to provide feedback in connection with the one or more matched products worn on the personalized user shopping avatar;

Instant Claim 5:
wherein obtaining the image includes using a depth sensor of the imaging module to enable structured light used to provide a 3D scan of the part of the user’s body.

Claim 20 of Patent ‘371:
further comprising a depth sensor configured to provide 3D data that enables enhanced generation of a user anatomical profile for the model of the part of the user anatomy.

Instant Claim 6:
further comprising interactively presenting modifications to at least one product recommendation based on the feedback.

Claim 18 of Patent ‘371:
generate an updated interactive presentation of the personalized user shopping avatar and a modified personalized product offering in accordance with the feedback.

Instant Claim 7:
further comprising providing a product matching feedback from a social network.

Claim 8 of Patent ‘371:
further comprising providing a product matching feedback from a social network.

Instant Claim 8:
wherein generating the personalized user shopping avatar includes accessing user history data and user preference data and wherein the personalized user shopping avatar includes a predicted user preference based on user history data and user preference data.

Claims 2, 3, and 16 of Patent ‘371:
further comprising accessing user history data; further comprising accessing user preference data; further comprising modifying the personalized user shopping avatar in accordance with changes in one or more of user anatomical information, user anatomy data, user behavior, user history, user preferences, and social feedback.
Instant Claim 9 Claim 9 of Patent ‘371 generating a virtual simulation of one or more products based on the personalized user shopping avatar and based on additional personalized shopping avatars of other users sharing common features with the personalized user shopping avatar. generating a virtual simulation of one or more products based on the personalized user shopping avatar and based on additional personalized shopping avatars of other users sharing common features with the personalized user shopping avatar. Instant Claim 10 Claim 10 of Patent ‘371 modifying the personalized user shopping avatar in accordance with changes in one or more of user anatomical information, user anatomy data, user behavior, user history, user preferences, and social feedback. modifying the personalized user shopping avatar in accordance with changes in one or more of user anatomical information, user anatomy data, user behavior, user history, user preferences, and social feedback. Instant Claim 11 Claim 15 of Patent ‘371 A platform for personalized shopping, comprising: A platform for personalized shopping, comprising: a cloud-based server including a profile module, a product module, and a product matching module; and a cloud based server including a profile module configured to generate a personalized shopping avatar based at least partially on anatomical profile data for multiple users, a product module configured to consolidate product data for multiple products, and a product matching module configured to match one or more products to one or more shopping avatars; and. 
a personalized shopping assistant application running on a mobile computing device, wherein the mobile computing device includes an imaging module attached to or integrated with a back portion of the mobile computing device, wherein the imaging module includes a 3D scanner configured to scan at least a part of a user’s body and obtain image capture data and to enable anatomical profile data capture of a user to be used by the profile module to generate a personalized shopping avatar for the user; a personalized shopping assistant application running on a mobile computing device, wherein the mobile computing device includes an imaging module attached to or integrated with a back portion of the mobile computing device, wherein the imaging module includes a 3D scanner configured to scan at least a part of a user's body and to enable anatomical profile data capture of a user to be used by the profile module to generate a personalized shopping avatar for the user; wherein the mobile computing device is communicatively connected to the cloud-based server to communication image capture data to the cloud-based server; wherein the mobile computing device is communicatively connected to the cloud based server; wherein image capture data and anatomical profile data are communicated to the cloud based server; wherein the profile module is configured to generate a personalized shopping avatar based at least partially on image capture data from the mobile computing device by: wherein the personalized shopping assistant application is configured to capture anatomical profile data of the user by: displaying markings on a display of the mobile automatically obtaining an image of the part of the user's body when the part is within the markings using the 3D scanner to scan the part of the user's body, wherein the image includes multidimensional anatomical data of the scanned part of the user's body; processing the image capture data to automatically identify one or more features of 
the part of the user’s body; and processing the image using machine vision to automatically identify one or more features of the part of the user's body; calculating, based on the processed image and with respect to the reference object and the mobile computing device, user body measurement data of the part of the user's body; and generating the personalized shopping avatar, wherein the personalized shopping avatar comprises a model of the part of the user’s body and user body measurement data calculated from the processed image capture data; generating the personalized shopping avatar based on generating a model of the part of the user's body, wherein the personalized shopping avatar comprises the user body measurement data and user behavior data; wherein the product matching module is configured to match one or more products to the personalized shopping avatar by: wherein the product matching module is configured to match one or more products to the personalized shopping avatar by: matching one or more products from consolidated product data to the personalized shopping avatar, and interactively presenting, on a display of the mobile computing device and using the personalized shopping assistant application, one or more matched products worn on the personalized shopping avatar; matching one or more products from the consolidated product data to the user body measurement data and the user behavior data of the personalized shopping avatar, and wherein the personalized shopping assistant application is further configured to: the personalized shopping assistant application, one or more matched products worn on the personalized shopping avatar; interactively presenting, on the display of the mobile computing device and using transmitting a prompt to a remote computing device associated with a sales associate at a point of sale to provide feedback in connection with the one or more matched products worn on the personalized shopping avatar; receiving feedback relating 
to the one or more matched products worn on the personalized shopping avatar; and receiving, from the remote computing device, feedback relating to the one or more matched products worn on the personalized shopping avatar; and upon receiving the feedback, generating an updated interactive presentation of the personalized shopping avatar and one or more modified product recommendations in accordance with the feedback. upon receiving the feedback, generating an updated interactive presentation of the personalized shopping avatar and one or more modified product recommendations in accordance with the feedback Instant Claim 12 Claim 16 of Patent ‘371 wherein the personalized shopping assistant application further includes a personalized product ordering module configured to generate personalized product recommendations tailored to the personalized shopping avatar. wherein the personalized shopping assistant application further includes a personalized product ordering module configured to generate personalized product recommendations. Instant Claim 13 Claim 15 of Patent ‘371 wherein the personalized shopping assistant application is configured to capture anatomical profile data of the user by: wherein the personalized shopping assistant application is configured to capture anatomical profile data of the user by: displaying markings on a display of the mobile computing device to guide the user to stand appropriately to allow an accurate scan of a part of a user’s body with respect to a reference object; and. 
displaying markings on a display of the mobile automatically obtaining an image of the part of the user’s body when the part is within the markings using the 3D scanner to scan the part of the user’s body automatically obtaining an image of the part of the user's body when the part is within the markings using the 3D scanner to scan the part of the user's body, wherein the image includes multidimensional anatomical data of the scanned part of the user's body; Regarding claim 14, claim 14 is directed to a platform. Claim 14 recites limitations that are parallel in nature to those addressed above for claim 3 which is directed towards a method. Claim 14 is therefore rejected for the same reasons as set forth above for claim 3. Instant Claim 15 Claim 15 of Patent ‘371 transmitting a prompt to a remote computing device at a point of sale, and wherein the prompt includes an indication to provide feedback in connection with the one or more matched products worn on the personalized shopping avatar for the user. transmitting a prompt to a remote computing device associated with a sales associate at a point of sale to provide feedback in connection with the one or more matched products worn on the personalized shopping avatar; Instant Claim 16 Claims 20 and 21 of Patent ‘371 wherein the personalized shopping assistant application further includes a social shopping module and a personalized product fitting module. wherein the personalized shopping assistant application further includes a social shopping module wherein the personalized shopping assistant application further includes a personalized product fitting module. Instant Claim 17 Claims 15 of Patent ‘371 wherein calculating, based on the processed image capture data and with respect to a reference object and the mobile computing device, user body measurement data of the part of the user’s body. 
calculating, based on the processed image and with respect to the reference object and the mobile Regarding claim 18, claim 18 is directed to a mobile device. Claim 18 recites limitations that are similar in nature to those addressed above for claim 1 which is directed towards a method. Claim 18 also recites the feature of a touch screen, which is found in claim 22 of Patent ‘371. Claim 18 is therefore rejected for the same reasons as set forth above for claim 1. Instant Claim 19 Claim 23 of Patent ‘371 further comprising registers adapted to analyze one or more user history data and user preference data, and wherein the processor is further configured to receive additional the one or more user history data and user preference data to cause a modification to the personalized user shopping avatar further comprising registers adapted to analyze one or more user history data and user preference data, and wherein the software application running on the mobile device is further configured to receive additional the one or more user history data and user preference data to cause a modification to the personalized user shopping avatar. Instant Claim 20 Claim 24 of Patent ‘371 further comprising a depth sensor configured to provide 3D data that enables enhanced generation of a user anatomical profile for the model of the part of the user anatomy. further comprising a depth sensor configured to provide 3D data that enables enhanced generation of a user anatomical profile for the model of the part of the user anatomy. Claim Rejections - 35 USC § 103 This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1-10 are rejected under 35 U.S.C. 103 as being unpatentable over Piana, A. (PGPub No. US 2014/0108208 A1), in view of Spector, D., et al. (PGPub No. US 2014/0270540 A1), and Fonte, T., et al. (PGPub No. US 2015/0055086 A1).
Claim 1- Piana discloses a method comprising: allow an accurate scan of a part of a user’s body (Piana, see: paragraph [0014] disclosing “scanners 10 owned by the shopper”; and paragraph [0015] disclosing “scanner 10 produces…an image 12 which is an exact replica of the shopper’s shape and body”; and paragraph [0022] disclosing “their body…from the front, back and sides”); obtaining an image of the part of the user’s body using a 3D scanner, wherein the 3D scanner is configured to scan the part of the user’s body (Piana, see: paragraph [0014] disclosing “scanners 10 owned by the shopper”; and paragraph [0015] disclosing “scanner 10 produces…an image 12 which is an exact replica of the shopper’s shape and body”; and paragraph [0017] disclosing “avatar/image 12 which is displayed at the shopper/consumer's terminal 16”; and paragraph [0022] disclosing “their body…from the front, back and sides” ); identify one or more features of the part of the user’s body (Piana, see: paragraph [0005] disclosing “(3D) body scan of the shopper (preferably of the entire body, however, portions of the body can also be imaged)”; and paragraph [0015] disclosing “scanner 10 produces an avatar/image 12 which is an exact replica of the shopper's shape and body size”); and generating a personalized user shopping avatar based on generating a model of the part of the user’s body, wherein the personalized user shopping avatar comprises user body measurement data from the processed image (Piana, see: paragraph [0005] disclosing “(3D) body scan of the shopper (preferably of the entire body, however, portions of the body can also be imaged)”; and paragraph [0015] disclosing “scanner 10 produces an avatar/image 12 which is an exact replica of the shopper's shape and body size”); generating at least one product recommendation based on matching one or more products from one or more product data sources to the personalized user shopping avatar (Piana, see: paragraph [0021] disclosing “can be provided 
with list or tables of retail brands or retailers [i.e., one or more product data sources]”; and paragraph [0022] disclosing “alternatives products and/or suggestions”; and paragraph [0023] disclosing “displayed products on their own personal avatar which is matched to their body shape, the shopper may make purchases”); interactively presenting, on the display of the mobile computing device and using the personalized shopping assistant application, one or more matched products worn on the personalized user shopping avatar (Piana, see: paragraph [0021] disclosing “shopper can use a smartphone, computer…to visit a website portal” and “retailers participating in a personalized virtual shopping assistant program”; and paragraph [0023] disclosing “displayed products on their own personal avatar which is matched to their body shape, the shopper may make purchases”); receiving, from a remote computing device, feedback relating to the one or more matched products worn on the personalized user shopping avatar (Piana, see: paragraph [0022] disclosing “a live chat whereby shoppers could be provided with input from a retailer” and “avatar can be rotated so that the shopper can see what a selected product would look like on their body”; and paragraph [0023] disclosing “displayed products on their own personal avatar which is matched to their body shape, the shopper may make purchases”); and upon receiving the feedback, automatically generating an updated interactive presentation of the personalized user shopping avatar and one or more modified product recommendations in accordance with the feedback (Piana, see: paragraph [0022] disclosing “the shopper being notified if the product cannot fit. In this way, the shopper might be able to compare a more tightly fitting size with a more loosely fitting size.
Preferably, the shopper will be able to selectively display several items, e.g., one after another, on his or her avatar” and “a live chat whereby shoppers could be provided with input from a retailer or manufacturer concerning alternative products and/or suggestions on size”). Although Piana discloses that a scanner owned by the shopper is used to capture an image and to create an avatar of the shopper that allows for measurements of the shopper’s body and size, Piana does not specifically disclose displaying markings on a display of a computing device to guide a user for the scan. Piana does not disclose that the captured image is processed, and does not disclose that the body measurements are calculated from the processed image. Piana does not disclose: displaying markings on a display of a mobile computing device to guide a user to allow an accurate scan of a user’s body with respect to a reference object; processing the image to automatically identify one or more features of the user’s body; user body measurement data calculated from the processed image; Spector, however, does teach: displaying markings on a display of a mobile computing device to guide a user to allow an accurate scan of a user’s body with respect to a reference object (Spector, see: paragraph [0050] teaching “the user is instructed to position a target object (e.g., front or side portions of the user’s body)” and “may further instruct the user to hold the user device at a particular position” and “instruction to position a reference object” and “at a position to the target object” and “capture a digital image including the target object and the reference object”; and paragraph [0057] teaching “instructions may be provided to the user” and “one or more sets of markers so as to indicate end points or boundaries…of the target object”; Also see FIG.
13 depicting an interface with the markers and the marker controls.); processing the image to automatically identify one or more features of the user’s body (Spector, see: paragraph [0054] teaching “a central feature of the target object (e.g., a face of a body) and corresponding outer features (e.g., shoulders) are identified. The features may be identified using image recognition”; and see: paragraph [0086] teaching “Reference markers may be…placed on the image based on similar analysis of the image” and paragraph [0202] teaching “Computer Vision technology may be used…to detect the best overbust and underbust measurements” and “processing, analyzing, and understanding images from the real world in order to produce numerical or symbolic information, used to size contours of the human body”); user body measurement data calculated from the processed image (Spector, see: paragraph [0054] teaching “a central feature of the target object (e.g., a face of a body) and corresponding outer features (e.g., shoulders) are identified. The features may be identified using image recognition or markers set by a user. At sub-operation 631B, distances between the central feature of body and each of the corresponding outer features are measured”; and paragraph [0202] teaching “processing, analyzing, and understanding images from the real world in order to produce numerical…information”). This step of Spector is applicable to the method of Piana, as they both share characteristics and capabilities, namely, they are directed to the measurements of a user’s size and shape.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Piana to include the features of displaying markings on a display of a mobile computing device to guide a user to allow an accurate scan, processing the image to automatically identify one or more features of the user’s body, and user body measurement data calculated from the processed image, as taught by Spector. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the combination of Piana to more accurately obtain the body measurements of a user remotely in order for a user to purchase try-on items remotely (Spector, see: paragraph [0006]). Further, Piana does not disclose: an imaging module attached to or integrated with a back portion of the mobile computing device. Fonte, however, does teach: an imaging module attached to or integrated with a back portion of the mobile computing device (Fonte, see: paragraph [0157] teaching “computer system 20001 configured with a imaging device on the…back 2003 of the computer system a is used to acquire image data of a user”; Also see FIG. 20 of Fonte, depicting the imaging device of a user’s handheld device, where the imaging device is integrated and found on the back portion of the mobile handheld device of the user.) This step of Fonte is applicable to the method of Piana, as they both share characteristics and capabilities, namely, they are directed to displaying products that are wearable on a user’s digital model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Piana to include the features of an imaging module attached to or integrated with a back portion of the mobile computing device, as taught by Fonte.
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the method of Piana with the teachings of Fonte to improve models of anatomic metrics of a user to better inform the user when trying on products virtually (Fonte, see: paragraph [0030]). Claim 2- Piana in view of Spector, and Fonte teach the method of claim 1, as described above. Piana discloses wherein processing the image comprises determining multidimensional geometrical data of the scanned part of the user’s body (Piana, see: paragraph [0015] disclosing “3D avatar” and “scanners 10 owned by the shopper” and “scanner 10 produces an avatar/image 12 which is an exact replica”; and see: [0020] disclosing “a 3D body scanner and creates a 3D body scan of themselves (avatar type) at step S10” and “The body scan provides for the construction of an avatar/image…shape of the shopper [i.e., multidimensional geometrical data]”). Claim 3- Piana in view of Spector, and Fonte teach the method of claim 1, as described above. Piana does not disclose: wherein processing the image includes using a machine learning engine to automatically recognize one or more body features. Fonte, however, does teach: wherein processing the image includes using a machine learning engine to automatically recognize one or more body features (Fonte, see: paragraph [0031] teaching “automatically from image data, may be used to provide further information to customize products” and “learning algorithms that enable a product design to be altered to suit a particular user”; and paragraph [0052] teaching “a learning machine or predictor or prognostication machine” and “the user provided image data…to determine user preferences for custom eyewear properties”; and paragraph [0053] teaching “In accordance with another embodiment, a system and method are disclosed for learning from a body of data”).
This step of Fonte is applicable to the method of Piana, as they both share characteristics and capabilities, namely, they are directed to displaying products that are wearable on a user’s digital model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Piana to include the features of wherein processing the image includes using a machine learning engine to automatically recognize one or more body features, as taught by Fonte. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the method of Piana with the teachings of Fonte to improve models of anatomic metrics of a user to better inform the user when trying on products virtually (Fonte, see: paragraph [0030]). Claim 4- Piana in view of Spector, and Fonte teach the method of claim 1, as described above. Piana discloses: wherein the feedback is received in response to transmitting a prompt to the remote computing device at a point of sale (Piana, see: paragraph [0009] disclosing “a system and method which allows consumers to utilize personalized, three dimensional images of themselves on…terminal (including terminals at retail store or kiosk) as a model on which they can see computer representations of clothing and accessories”; and paragraph [0022] disclosing “a live chat whereby shoppers could be provided with input from a retailer”), and wherein the prompt includes an indication to provide feedback in connection with the one or more matched products worn on the personalized user shopping avatar (Piana, see: paragraph [0022] disclosing “a live chat whereby shoppers could be provided with input from a retailer” and “avatar can be rotated so that the shopper can see what a selected product would look like on their body”; and paragraph [0023] disclosing “displayed products on their own personal avatar which is matched to their body shape, the shopper may make purchases”).
Claim 5- Piana in view of Spector, and Fonte teach the method of claim 1, as described above. Piana does not disclose: wherein obtaining the image includes using a depth sensor of the imaging module to enable structured light used to provide a 3D scan of the part of the user’s body. Fonte, however, does teach: wherein obtaining the image includes using a depth sensor of the imaging module to enable structured light used to provide a 3D scan of the part of the user’s body (Fonte, see: paragraph [0142] teaching “depth cameras or laser sensors may be used to acquire the image data” and “to estimate depth”; and paragraph [0205] teaching “detect the exact center of the nose due to lighting constraints”; and paragraph [0218] teaching “computer system analyzes the user's image data for lighting”). This step of Fonte is applicable to the method of Piana, as they both share characteristics and capabilities, namely, they are directed to displaying products that are wearable on a user’s digital model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Piana to include the features of wherein obtaining the image includes using a depth sensor of the imaging module to enable structured light used to provide a 3D scan of the part of the user’s body, as taught by Fonte. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the method of Piana with the teachings of Fonte to improve models of anatomic metrics of a user to better inform the user when trying on products virtually (Fonte, see: paragraph [0030]). Claim 6- Piana in view of Spector, and Fonte teach the method of claim 1, as described above. Piana discloses further comprising interactively presenting modifications to at least one product recommendation based on the feedback (Piana, see: paragraph [0022] disclosing “the shopper being notified if the product cannot fit.
In this way, the shopper might be able to compare a more tightly fitting size with a more loosely fitting size. Preferably, the shopper will be able to selectively display several items, e.g., one after another, on his or her avatar” and “a live chat whereby shoppers could be provided with input from a retailer or manufacturer concerning alternative products and/or suggestions on size”). Claim 7- Piana in view of Spector, and Fonte teach the method of claim 1, as described above. Piana does not disclose: further comprising providing a product matching feedback from a social network. Fonte, however, does teach: further comprising providing a product matching feedback from a social network (Fonte, see: paragraph [0313] teaching “user provides access to his image data and anatomic model to another party, such as a friend” and “sent directly to another person through…social networking” and “The other party then adjusts, customizes, and previews eyewear on the original user’s face model” and “saves favorites” and “then sends back images, designs, views…”). This step of Fonte is applicable to the method of Piana, as they both share characteristics and capabilities, namely, they are directed to displaying products that are wearable on a user’s digital model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Piana to include the features of further comprising providing a product matching feedback from a social network, as taught by Fonte. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the method of Piana with the teachings of Fonte to improve models of anatomic metrics of a user to better inform the user when trying on products virtually (Fonte, see: paragraph [0030]). Claim 8- Piana in view of Spector, and Fonte teach the method of claim 1, as described above.
Piana discloses: wherein generating the personalized user shopping avatar includes accessing user history data and user preference data and wherein the personalized user shopping avatar includes a predicted user preference based on user history data and user preference data (Piana, see: paragraph [0018] disclosing “Preferably, an ordering system will enable a consumer at his or her terminal 16 to select items of interest to him or her, and through an interface with the retailers 22 or manufacturers 24” and “establish a retrieval list with items of interest he or she might retrieve for more comparative shopping at a later time”; and see: paragraph [0021] disclosing “the shopper could create groups of products much like a play list is created”; and see: paragraph [0023] disclosing “the shopper may make purchases at steps 520 and 522 and/or perform other operations such as set aside an item for easier retrieval during a later shopping session”). Claim 9- Piana in view of Spector, and Fonte teach the method of claim 1, as described above. 
Piana discloses further comprising generating a virtual simulation of one or more products based on the personalized user shopping avatar (Piana, see: paragraph [0017] disclosing “dress the 3D avatar/image 12 of the shopper” and “For example, a coat size could increase or decrease, and a pant size could lengthen or shorten as well as expand or contract” and “the consumer can ‘see’ a representation of himself or herself with the products on his or her virtual body”; and see: paragraph [0022] disclosing “the avatar can be rotated so that the shopper can see what a selected product would look like on their body (i.e., their avatar) from the front, back and sides”) and based on additional personalized shopping avatars of other users sharing common features with the personalized user shopping avatar (Piana, see: paragraph [0006] disclosing “a service provider could store avatars for a plurality of customers, each of which is created from a 3D body scan the individual shopper”). Claim 10- Piana in view of Spector, and Fonte teach the method of claim 1, as described above. Piana discloses further comprising modifying the personalized user shopping avatar in accordance with changes in one or more of user anatomical information, user anatomy data, user behavior, user history, user preferences, and feedback (Piana, see: paragraph [0022] disclosing “the shopper being notified if the product cannot fit. In this way, the shopper might be able to compare a more tightly fitting size with a more loosely fitting size. Preferably, the shopper will be able to selectively display several items, e.g., one after another, on his or her avatar” and “a live chat whereby shoppers could be provided with input from a retailer or manufacturer concerning alternative products and/or suggestions on size”).
Piana does not disclose: social feedback; Fonte, however, does teach: social feedback (Fonte, see: paragraph [0313] teaching “user provides access to his image data and anatomic model to another party, such as a friend” and “sent directly to another person through…social networking” and “The other party then adjusts, customizes, and previews eyewear on the original user’s face model” and “saves favorites” and “then sends back images, designs, views…”). This step of Fonte is applicable to the method of Piana, as they both share characteristics and capabilities, namely, they are directed to displaying products that are wearable on a user’s digital model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Piana to include the features of social feedback, as taught by Fonte. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the method of Piana with the teachings of Fonte to improve models of anatomic metrics of a user to better inform the user when trying on products virtually (Fonte, see: paragraph [0030]). Claims 11-17 are rejected under 35 U.S.C. 103 as being unpatentable over Piana, A., in view of Fonte, T., et al., Altieri, F. (PGPub No. US 2015/0066712 A1), and Spector, D. Claim 11- Piana discloses a platform for personalized shopping, comprising: a server including a profile module, a product module, and a product matching module (Piana, see: paragraph [0012] disclosing “include in a ‘virtual library’ [i.e., a profile module] for the consumer”; and see: paragraph [0013] disclosing “remote server system [i.e., server]”; and see: paragraph [0021] disclosing “the shopper can be provided with list or tables of retail brands or retailers participating [i.e., a product module] in a personalized virtual shopping assistant program.
At step 516, the shopper may use their smartphone…to select products (e.g., denim pants, T-shirts, suites, etc.) and/or brands”); and a personalized shopping assistant application running on a mobile computing device, wherein the mobile computing device includes an imaging module, wherein the imaging module includes a 3D scanner configured to scan at least a part of a user’s body and obtain image capture data and to enable anatomical profile data capture of a user to be used by the profile module to generate a personalized shopping avatar for the user (Piana, see: paragraph [0015] disclosing “have a 3D avatar personalized to each of a plurality of shoppers/consumers” and “The scanner 10 produces an avatar/image 12 which is an exact replica of the shopper’s shape and body size” and “have a server 15 (which can be a single server or a server farm of several local or remotely connected servers) that stores the avatars of several shoppers (perhaps hundreds of thousands of shoppers)”; and see: paragraph [0016] disclosing “use terminal(s) 16 to access the avatar/image 12 for online shopping purposes”; and see: paragraph [0020] disclosing “The body scan provides for the construction of an avatar/image…shape of the shopper”; and see: paragraph [0021] disclosing “the shoppers can use a smartphone, computer, PDA or other device to visit a website portal” and “the shopper can be provided with list or tables of retail brands or retailers participating in a personalized virtual shopping assistant program.”; and see: paragraph [0022] disclosing “avatar can be rotated so that the shopper can see what selected product would look like”); wherein the mobile computing device is communicatively connected to the server to communication image capture data to the server (Piana, see: paragraph [0013] disclosing “remote server system [i.e., server]” and paragraph [0016] disclosing “use terminal(s) 16 to access the avatar/image 12 for online shopping purposes”); wherein the profile module 
is configured to generate a personalized shopping avatar based at least partially on image capture data from the mobile computing device (Piana, see: paragraph [0015] disclosing “have a 3D avatar personalized to each of a plurality of shoppers/consumers”) by: processing the image capture data to automatically identify one or more features of the part of the user’s body (Piana, see paragraph: [0015] disclosing “have a 3D avatar personalized to each of a plurality of shoppers/consumers” and “The scanner 10 produces an avatar/image 12 [i.e., obtaining an image] which is an exact replica of the shopper’s shape and body size [i.e., of the user’s body]” and see: paragraph [0020] disclosing “a 3D body scanner and creates a 3D body scan of themselves (avatar type) at step S10” and “The body scan provides for the construction of an avatar/image…shape of the shopper”); and generating the personalized shopping avatar, wherein the personalized shopping avatar comprises a model of the part of the user’s body and user body measurement data from the processed image capture data (Piana, see: paragraph [0005] disclosing “a personalized virtual shopping assistant that is an avatar of the shopper”; and see: paragraph [0015] disclosing “have a 3D avatar personalized to each of a plurality of shoppers/consumers” and “The scanner 10 produces an avatar/image 12 which is an exact replica [i.e., generating a model of the user’s body] of the shopper’s shape and body size”; and see: paragraph [0022] disclosing “avatar can be rotated so that the shopper can see what selected product would look like on their body (i.e., their avatar) [i.e., personalized avatar] from the front, back and sides [i.e., part of the user’s body]); wherein the product matching module is configured to match one or more products to the personalized shopping avatar by: matching one or more products from consolidated product data to the personalized shopping avatar (Piana, see: paragraph [0005] disclosing “a personalized 
virtual shopping assistant that is an avatar of the shopper”; and see: paragraph [0015] disclosing “have a 3D avatar personalized to each of a plurality of shoppers/consumers” and “The scanner 10 produces an avatar/image 12 which is an exact replica of the shopper’s shape and body size”; and see: paragraph [0022] disclosing “avatar can be rotated so that the shopper can see what selected product would look like on their body (i.e., their avatar) [i.e., personalized avatar] from the front, back and sides”; and paragraph [0023] disclosing “displayed products on their own personal avatar which is matched to their body shape, the shopper may make purchases”), and interactively presenting, on a display of the mobile computing device and using the personalized shopping assistant application, one or more matched products worn on the personalized shopping avatar (Piana, see: paragraph [0023] disclosing “While viewing the computer representation [i.e., interactively presenting] displayed products on their own personal avatar which is matched the shopper may make purchases”); receiving feedback relating to the one or more matched products worn on the personalized shopping avatar (Piana, see: paragraph [0022] disclosing “a live chat whereby shoppers could be provided with input from a retailer” and “avatar can be rotated so that the shopper can see what a selected product would look like on their body”; and paragraph [0023] disclosing “displayed products on their own personal avatar which is matched to their body shape, the shopper may make purchases”); and upon receiving the feedback, generating an updated interactive presentation of the personalized shopping avatar and one or more modified product recommendations in accordance with the feedback (Piana, see: paragraph [0022] disclosing “the shopper being notified if the product cannot fit. In this way, the shopper might be able to compare a more tightly fitting size with a more loosely fitting size.
Preferably, the shopper will be able to selectively display several items, e.g., one after another, on his or her avatar” and “a live chat whereby shoppers could be provided with input from a retailer or manufacturer concerning alternative products and/or suggestions on size”). Piana does not disclose: an imaging module attached to or integrated with a back portion of the mobile computing device. Fonte, however, does teach: an imaging module attached to or integrated with a back portion of the mobile computing device (Fonte, see: paragraph [0157] teaching “computer system 20001 configured with a imaging device on the…back 2003 of the computer system a is used to acquire image data of a user”; Also see FIG. 20 of Fonte, depicting the imaging device of a user’s handheld device, where the imaging device is integrated and found on the back portion of the mobile handheld device of the user.) This step of Fonte is applicable to the system of Piana, as they both share characteristics and capabilities, namely, they are directed to displaying products that are wearable on a user’s digital model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Piana to include the features of an imaging module attached to or integrated with a back portion of the mobile computing device, as taught by Fonte. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the system of Piana with the teachings of Fonte to improve models of anatomic metrics of a user to better inform the user when trying on products virtually (Fonte, see: paragraph [0030]). Further, although Piana does disclose a server (Piana, paragraph [0012]), and does disclose a mobile user computing device that runs a shopping assistant application (Piana, see: paragraph [0021]), Piana does not explicitly disclose that the server is a cloud-based server.
Piana does not disclose: a cloud-based server; Altieri, however, does teach: a cloud-based server (Altieri, see: paragraph [0104] teaching “In another embodiment, the QIE 203 engine is located on a main computer client-server system (see FIG. 5) working over the internet 220, intranet, cloud”). This step of Altieri is applicable to the system of Piana, as they both share characteristics and capabilities, namely, they are directed to identifying merchandise remotely. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Piana to include the feature of a cloud-based server, as taught by Altieri. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the combination of Piana to improve the ways of tracking merchandise that would be presented virtually to a customer (Altieri, see: paragraph [0005]). Although Piana discloses that a scanner owned by the shopper is used to capture an image and to create an avatar of the shopper that allows for measurements of the shopper’s body and size, Piana does not specifically state that the body measurements are calculated from the processed image. Piana does not disclose: processing the image to automatically identify one or more features of the user’s body; user body measurement data calculated from the processed image; Spector, however, does teach: processing the image to automatically identify one or more features of the user’s body (Spector, see: paragraph [0054] teaching “a central feature of the target object (e.g., a face of a body) and corresponding outer features (e.g., shoulders) are identified. 
The features may be identified using image recognition”; and see: paragraph [0086] teaching “Reference markers may be…placed on the image based on similar analysis of the image” and paragraph [0202] teaching “Computer Vision technology may be used…to detect the best overbust and underbust measurements” and “processing, analyzing, and understanding images from the real world in order to produce numerical or symbolic information, used to size contours of the human body”); user body measurement data calculated from the processed image (Spector, see: paragraph [0054] teaching “a central feature of the target object (e.g., a face of a body) and corresponding outer features (e.g., shoulders) are identified. The features may be identified using image recognition or markers set by a user. At sub-operation 631B, distances between the central feature of body and each of the corresponding outer features are measured”; and paragraph [0202] teaching “processing, analyzing, and understanding images from the real world in order to produce numerical…information”). This step of Spector is applicable to the system of Piana, as they both share characteristics and capabilities, namely, they are directed to the measurements of a user’s size and shape. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Piana to include the features of displaying markings on a display of a mobile computing device to guide a user to allow an accurate scan, processing the image to automatically identify one or more features of the user’s body, and user body measurement data calculated from the processed image, as taught by Spector. 
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the combination of Piana, to more accurately obtain the body measurements of a user remotely in order for a user to purchase try-on items remotely (Spector, see: paragraph [0006]). Claim 12- Piana in view of Fonte, Altieri, and Spector teach the platform of claim 11, as described above. Piana further discloses: wherein the personalized shopping assistant application further includes a personalized product ordering module configured to generate personalized product recommendations tailored to the personalized shopping avatar (Piana, see: paragraph [0022] disclosing “shoppers could be provided with input from a retailer or manufacturer concerning alternative products and/or suggestions on size, fabric, cut, and other information which may be of interest.”; and see: paragraph [0023] disclosing “displayed products on their own personal avatar which is matched to their body shape, the shopper may make purchases at steps 520 and 522” and “he or she can decide to place an order for the item”; and see: paragraph [0024] disclosing “the shopper/customer could have a body scan prepared at a retail outlet for a particular product manufacturer (e.g., Armani™, Disel™,…etc.)” and “retailer might be able to achieve make to order [i.e., personalized product ordering] benefits using this inventive system”). 
Claim 13- Piana in view of Fonte, Altieri, and Spector teach the platform of claim 11, wherein the personalized shopping assistant application is configured to capture anatomical profile data of the user by: Although Piana discloses using the 3D scanner to scan the part of the user’s body (Piana, see: paragraph [0015] disclosing “have a 3D avatar personalized to each of a plurality of shoppers/consumers” and “The scanner 10 produces an avatar/image 12 which is an exact replica of the shopper’s shape and body size”), Piana does not disclose: displaying markings on a display of the mobile computing device to guide the user to stand appropriately to allow an accurate scan of a part of a user’s body with respect to a reference object; and automatically obtaining an image of the part of the user’s body when the part is within the markings; Spector, however, does teach: displaying markings on a display of the mobile user computing device to guide the user to stand appropriately to allow an accurate scan of a part of a user’s body with respect to a reference object (Spector, see: paragraph [0050] teaching “the instructions may instruct the user to stand in a particular position” and “The instructions may further instruct the user to hold the user device at a particular position”; and see: paragraph [0051] teaching “the user is instructed to position a target object (e.g., front or side portions of the user’s body)” and “the user is instructed to position a reference object” and “at a position to the target object” and “the user is instructed to actuate the camera in order to capture a digital image including the target object [i.e., a part of the user’s body] and the reference object”; and see: paragraph [0057] teaching “additional instructions may be provided to the user to select locations of one or more sets of markers so as to indicate end points or boundaries of a section of the target object [i.e., a part of the user’s body]”; Also see: paragraph [0181] teaching “FIG. 
13 depicts a user interface ("UI") 1300. The UI depicted in FIG. 13 may take various configurations and may perform various functions within the scope and spirit of the disclosure. For example, the disclosed UI 1300 may include a marker 1315 and a marker control 1325. The marker control 1325 may be activated by a user's finger when touched on the display [i.e., display of the mobile user computing device]”; Also see: FIG. 13 “iPhone” and el. 1315 and 1325 “marker” and FIG. 14) (Examiner’s note: The Examiner is interpreting that when the markers are visible on the user’s device, the user may be guided by the markers to stand appropriately to capture the image of the parts of the body, such as what is demonstrated in FIGS. 13 and 14 of Spector.); automatically obtaining an image of the part of the user’s body when the part is within the markings (Spector, see: paragraph [0051] teaching “the user is instructed to position a target object (e.g., front or side portions of the user’s body)” and “the user is instructed to position a reference object” and “at a position to the target object” and “the user is instructed to actuate the camera in order to capture a digital image including the target object [i.e., a part of the user’s body]”; and see: paragraph [0057] teaching “additional instructions may be provided to the user to select locations of one or more sets of markers so as to indicate end points or boundaries of a section of the target object [i.e., a part of the user’s body]”; Also see: FIG. 13). This step of Spector is applicable to the system of Piana, as they both share characteristics and capabilities, namely, they are directed to the measurements of a user’s size and shape. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Piana to include the features of displaying markings on a display of the mobile computing device to guide the user to stand appropriately to allow an accurate scan of a part of a user’s body with respect to a reference object, and automatically obtaining an image of the part of the user’s body when the part is within the markings, as taught by Spector. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the combination of Piana, to more accurately obtain the body measurements of a user remotely in order for a user to purchase try-on items remotely (Spector, see: paragraph [0006]). Claim 14- Piana in view of Fonte, Altieri, and Spector teach the platform of claim 13, wherein the personalized shopping assistant application is further configured to capture anatomical profile data of the user by: Piana does not disclose: using a machine learning engine to identify one or more features of the part of the user’s body from the image. Spector, however, does teach: using a machine learning engine to identify one or more features of the part of the user’s body from the image (Spector, see: paragraph [0139] teaching “captures one or more images” and “prominent feature points may be extracted” and “computer vision algorithms may be used for such feature extraction” and paragraph [0141]). This step of Spector is applicable to the system of Piana, as they both share characteristics and capabilities, namely, they are directed to the measurements of a user’s size and shape. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Piana to include the features of using a machine learning engine to identify one or more features of the part of the user’s body from the image, as taught by Spector. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the combination of Piana, to more accurately obtain the body measurements of a user remotely in order for a user to purchase try-on items remotely (Spector, see: paragraph [0006]). Claim 15- Piana in view of Fonte, Altieri, and Spector teach the platform of claim 11, as described above. Piana discloses: wherein the feedback is received in response to transmitting a prompt to a remote computing device at a point of sale, and wherein the prompt includes an indication to provide feedback in connection with the one or more matched products worn on the personalized shopping avatar for the user (Piana, see: paragraph [0022] disclosing “a live chat whereby shoppers could be provided with input from a retailer” and “avatar can be rotated so that the shopper can see what a selected product would look like on their body”; and paragraph [0023] disclosing “displayed products on their own personal avatar which is matched to their body shape, the shopper may make purchases”). Claim 16- Piana in view of Fonte, Altieri, and Spector teach the platform of claim 11, as described above. Piana discloses wherein the personalized shopping assistant application further includes a social shopping module and a personalized product fitting module (Piana, see: paragraph [0022] disclosing “a live chat whereby shoppers could be provided with input from a retailer” and “avatar can be rotated so that the shopper can see what a selected product would look like on their body”). 
Claim 17- Piana in view of Fonte, Altieri, and Spector teach the platform of claim 11, as described above. Piana does not disclose: wherein calculating, based on the processed image capture data and with respect to a reference object and the mobile computing device, user body measurement data of the part of the user’s body. Spector, however, does teach: wherein calculating, based on the processed image capture data and with respect to a reference object and the mobile computing device, user body measurement data of the part of the user’s body (Spector, see: paragraph [0051] teaching “capture a digital image including the target object [i.e., a part of the user’s body]”; and see: paragraph [0067] teaching “an estimated actual length between the end points of the target object’s section is determined by calculating the product of the virtual distance of the target object’s [i.e., the part of the user’s body] section and the actual distance of the reference object [i.e., with respect to the reference object], and then dividing that product by the virtual distance of the reference object to obtain the estimated actual length [i.e., body measurement data]”). This step of Spector is applicable to the system of Piana, as they both share characteristics and capabilities, namely, they are directed to the measurements of a user’s size and shape. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Piana to include the feature of calculating, based on the processed image capture data and with respect to a reference object and the mobile computing device, user body measurement data of the part of the user’s body, as taught by Spector. 
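The measurement technique quoted from Spector's paragraph [0067] reduces to a single proportional-scaling computation: multiply the pixel ("virtual") distance of the body section by the known real-world size of the reference object, then divide by the reference object's pixel distance. A minimal sketch of that arithmetic follows; the function and variable names are illustrative only and do not appear in either reference:

```python
def estimate_actual_length(virtual_target_px: float,
                           virtual_reference_px: float,
                           actual_reference_cm: float) -> float:
    """Estimate a real-world body measurement from pixel distances in an image.

    Per the quoted passage: (virtual distance of the target section
    x actual size of the reference object) / virtual distance of the
    reference object = estimated actual length of the target section.
    """
    if virtual_reference_px <= 0:
        raise ValueError("reference object must span a positive pixel distance")
    return (virtual_target_px * actual_reference_cm) / virtual_reference_px

# Illustrative numbers: a shoulder span of 300 px imaged next to a
# 12 cm reference card that spans 100 px in the same image.
print(estimate_actual_length(300, 100, 12.0))  # 36.0 (cm)
```

Note that this same-plane scaling is valid only when the body section and the reference object sit at roughly the same distance from the camera, which is why the cited passages instruct the user to position the reference object "at a position to the target object."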
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the combination of Piana, to more accurately obtain the body measurements of a user remotely in order for a user to purchase try-on items remotely (Spector, see: paragraph [0006]). Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Piana, A., in view of Fonte, T., et al., and Spector, D. Claim 18- Piana discloses a mobile device comprising: receive a user’s input (Piana, see: paragraph [0015] disclosing “avatar/image 12 could be supplemented with additional information entered by the shopper using an editor”); an imaging module, the imaging module including a 3D image scanner configured to capture one or more images of at least a part of a user anatomy (Piana, see: paragraph [0015] disclosing “3D avatar” and “scanners 10 owned by the shopper” and “scanner 10 produces an avatar/image 12 which is an exact replica”; and see: [0020] disclosing “a 3D body scanner and creates a 3D body scan of themselves (avatar type) at step S10” and “The body scan provides for the construction of an avatar/image…shape of the shopper” and “the scanned image…will be stored at step S12 for later use in online shopping”; and see: paragraph [0022] disclosing “avatar can be rotated so that the shopper can see what selected product would look like on their body (i.e., their avatar) from the front, back and sides [i.e., part of the user anatomy]”); a processor having registers (Piana, see: paragraph [0009] disclosing “a system and method which allows consumers to utilize personalized, three dimensional images of themselves on their computer…or terminal (including terminals at retail store or kiosk) [i.e., processor] as a model on which they can see computer representations of clothing and accessories”) adapted to: automatically obtain an image of the part of the user anatomy using the 3D image scanner to scan the part of the user anatomy (Piana, 
see: paragraph [0015] disclosing “have a 3D avatar personalized to each of a plurality of shoppers/consumers” and “The scanner 10 produces an avatar/image 12 which is an exact replica of the shopper’s shape and body size [i.e., of the user’s body]”; and see: paragraph [0020] disclosing “a 3D body scanner and creates a 3D body scan of themselves (avatar type) at step S10” and “The body scan provides for the construction of an avatar/image…shape of the shopper [i.e., multidimensional geometrical data]” and “the scanned image…will be stored at step S12 for later use in online shopping”; and see: paragraph [0022] disclosing “avatar can be rotated so that the shopper can see what selected product would look like on their body (i.e., their avatar) from the front, back and sides”); process the image to automatically identify one or more features of the part of the user anatomy (Piana, see: paragraph [0015] disclosing “have a 3D avatar personalized to each of a plurality of shoppers/consumers”; and see: paragraph [0020] disclosing “a 3D body scanner and creates a 3D body scan of themselves (avatar type) at step S10”; and see: paragraph [0022] disclosing “avatar can be rotated so that the shopper can see what selected product would look like on their body (i.e., their avatar) from the front, back and sides”; and see paragraph [0017]); generate a personalized user shopping avatar that includes a model of the part of the user anatomy (Piana, see: paragraph [0020] disclosing “a 3D body scanner and creates a 3D body scan of themselves (avatar type) at step S10”; and see: paragraph [0022] disclosing “avatar can be rotated so that the shopper can see what selected product would look like on their body (i.e., their avatar) from the front, back and sides”); and match the personalized user shopping avatar with external product data to generate a personalized product offering (Piana, see: paragraph [0023] disclosing “While viewing the computer representation 
displayed products on their own personal avatar which is matched so that the shopper may make purchases”); interactively present, on the display of the mobile device, the personalized product offering being worn on the personalized user shopping avatar (Piana, see: paragraph [0021] disclosing “the shoppers can use a smartphone, computer, PDA or other device to visit a website portal”; and see: paragraph [0023] “While viewing the computer representation [i.e., interactively presenting] displayed products on their own personal avatar”); receive feedback relating to the personalized product offering worn on the personalized user shopping avatar (Piana, see: paragraph [0022] disclosing “a live chat whereby shoppers could be provided with input from a retailer” and “avatar can be rotated so that the shopper can see what a selected product would look like on their body”; and paragraph [0023] disclosing “displayed products on their own personal avatar which is matched to their body shape, the shopper may make purchases”); and generate an updated interactive presentation of the personalized user shopping avatar (Piana, see: paragraph [0022] disclosing “shoppers could be provided with input from a retailer or manufacturer concerning alternative products and/or suggestions [i.e., recommendations] on size, fabric, cut, and other information which may be of interest” and “products will be fitted onto the avatar of the shopper” and “the avatar can be rotated so that the shopper can see what a selected product would look like on their body (i.e., their avatar) from the front, back, and sides”). 
Piana does not disclose: a touch screen; imaging module that is attached to or integrated with a back portion of the mobile device, modify one or more aspects of the personalized product offering based on the feedback; upon receiving feedback, generate a modified personalized product offering in accordance with the feedback; Fonte, however, does teach: a touch screen (Fonte, see: paragraph [0038] teaching “Input devices include touchscreens”); imaging module that is attached to or integrated with a back portion of the mobile device (Fonte, see: paragraph [0157] teaching “computer system 20001 configured with a imaging device on the…back 2003 of the computer system a is used to acquire image data of a user”; Also see FIG. 20 of Fonte, depicting the imaging device of a user’s handheld device, where the imaging device is integrated and found on the back portion of the mobile handheld device of the user.), modify one or more aspects of the personalized product offering based on the feedback (Fonte, see: paragraph [0058] teaching “The system and method further include the third party to provide feedback and updated designs to the user” and see paragraph [0286]); upon receiving feedback, generate a modified personalized product offering in accordance with the feedback (Fonte, see: paragraph [0286] teaching “the computer system may ask, "Is the eyewear currently too wide or narrow on your face?" or "Is the eyewear currently too thick or thin?" or "Do you prefer larger or smaller styles?" The user would be able to select an option or answer the prompts through the interface and then subsequently observe an adjustment to the eyewear in response. When coupled with machine learning techniques described herein, this could represent a powerful means to provide a personalized and custom recommendation, while allowing slight adaptation based on live feedback from the user”). 
This step of Fonte is applicable to the system of Piana, as they both share characteristics and capabilities, namely, they are directed to displaying products that are wearable on a user’s digital model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Piana, to include the features of a touch screen, imaging module that is attached to or integrated with a back portion of the mobile device, modify one or more aspects of the personalized product offering based on the feedback, and upon receiving feedback, generate a modified personalized product offering in accordance with the feedback, as taught by Fonte. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the reference of Fonte to improve models of anatomic metrics of a user to better inform the user when trying on products virtually (Fonte, see: paragraph [0030]). Further, Piana does not disclose: display markings on a display of the mobile device to guide a user to stand appropriately to allow an accurate scan of the part of the user anatomy with respect to a reference object; wherein the model comprises body measurement data calculated from the processed image, Spector, however, does teach: display markings on a display of the mobile device to guide a user to stand appropriately to allow an accurate scan of the part of the user anatomy with respect to a reference object (Spector, see: paragraph [0050] teaching “the instructions may instruct the user to stand in a particular position” and “The instructions may further instruct the user to hold the user device at a particular position”; and see: paragraph [0051] teaching “the user is instructed to position a target object (e.g., front or side portions of the user’s body)” and “the user is instructed to position a reference object” and “at a position to the target object” and “the user is 
instructed to actuate the camera in order to capture a digital image including the target object [i.e., a part of the user’s body] and the reference object”; and see: paragraph [0057] teaching “additional instructions may be provided to the user to select locations of one or more sets of markers so as to indicate end points or boundaries of a section of the target object [i.e., a part of the user’s body]”; Also see: paragraph [0181] teaching “FIG. 13 depicts a user interface ("UI") 1300. The UI depicted in FIG. 13 may take various configurations and may perform various functions within the scope and spirit of the disclosure. For example, the disclosed UI 1300 may include a marker 1315 and a marker control 1325. The marker control 1325 may be activated by a user's finger when touched on the display [i.e., display of the mobile user computing device]”; Also see: FIG. 13 “iPhone” and el. 1315 and 1325 “marker” and FIG. 14) (Examiner’s note: The Examiner is interpreting that when the markers are visible on the user’s device, the user may be guided by the markers to stand appropriately to capture the image of the parts of the body, such as what is demonstrated in FIGS. 13 and 14 of Spector.); wherein the model comprises body measurement data calculated from the processed image (Spector, see: paragraph [0051] teaching “the user is instructed to actuate the camera in order to capture a digital image including the target object [i.e., a part of the user’s body]”; and see: paragraph [0067] teaching “an estimated actual length between the end points of the target object’s section is determined by calculating the product of the virtual distance of the target object’s [i.e., the part of the user’s body] section and the actual distance of the reference object [i.e., with respect to the reference object], and then dividing that product by the virtual distance of the reference object to obtain the estimated actual length [i.e., body measurement data]”). 
This step of Spector is applicable to the product of manufacture of Piana, as they both share characteristics and capabilities, namely, they are directed to the measurements of a user’s size and shape. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the product of manufacture of Piana to include the features of display markings on a display of the mobile device to guide a user to stand appropriately to allow an accurate scan of the part of the user anatomy with respect to a reference object, and wherein the model comprises body measurement data calculated from the processed image, as taught by Spector. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the combination of Piana, to more accurately obtain the body measurements of a user remotely (Spector, see: paragraph [0006]). Claim 19- Piana in view of Fonte, and Spector teach the mobile device of claim 18, as described above. 
Piana discloses further comprising registers adapted to analyze one or more user history data and user preference data, and wherein the processor is further configured to receive the one or more user history data and user preference data to cause a modification to the personalized user shopping avatar (Piana, see: paragraph [0018] disclosing “his or her terminal 16 to select items of interest to him or her, and through an interface with the retailers 22 or manufacturers 24” and “establish a retrieval list with items of interest he or she might retrieve for more comparative shopping at a later time [i.e., accesses user history]”; and see: paragraph [0021] disclosing “the shopper could create groups of products much like a play list is created”; and see: paragraph [0023] disclosing “the shopper may make purchases at steps 520 and 522 and/or perform other operations such as set aside an item for easier retrieval during a later shopping session [i.e., analyze user history data]”). Claim 20- Piana in view of Fonte, and Spector teach the mobile device of claim 18, as described above. Piana discloses further comprising a sensor to provide 3D data that enables enhanced generation of a user anatomical profile for the model of the part of the user anatomy (Piana, see: paragraph [0015] disclosing “3D avatar personalized to each of a plurality of shoppers/consumers/customers” and “scanner 10 produces an avatar/image 12 which is an exact replica of the shopper’s shape and body”; and see: paragraph [0022] disclosing “avatar can be rotated so that the shopper can see what selected product would look like on their body (i.e., their avatar) from the front, back and sides [i.e., part of the user anatomy]”). Piana does not disclose: a depth sensor; Fonte, however, does teach: a depth sensor (Fonte, see: paragraph [0038] teaching “images acquired with depth cameras” and paragraph [0142] teaching “depth cameras or sensors may be used to acquire the image data”). 
This step of Fonte is applicable to the system of Piana, as they both share characteristics and capabilities, namely, they are directed to displaying products that are wearable on a user’s digital model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Piana, to include the feature of a depth sensor, as taught by Fonte. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify the reference of Fonte to improve models of anatomic metrics of a user to better inform the user when trying on products virtually (Fonte, see: paragraph [0030]). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Non-patent literature (NPL) document, titled "This Startup is Turning the Human Body Into a Next Gen Design Platform," published on wired.com (2014), describes techniques to utilize body image data to generate and create 3D avatars for users, in order to customize products that can then be visualized on the user’s avatar. Wu, et al. (PGPub No. US 2017/0345089 A1), describes frameworks and methodologies configured to enable generation and utilization of three-dimensional body scan data. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASHLEY PRESTON whose telephone number is (571)272-4399. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey Smith, can be reached at 571-272-6763. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ASHLEY D PRESTON/Primary Examiner, Art Unit 3688

Prosecution Timeline

Sep 19, 2024
Application Filed
Mar 20, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591916
VIRTUAL SPACE CHANGING APPARATUS, VIRTUAL SPACE CHANGING METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12586116
PRODUCT RECOMMENDATION METHOD
2y 5m to grant Granted Mar 24, 2026
Patent 12586117
METHODS AND SYSTEM FOR AUTOMATIC POPULATION OF ITEM RECOMMENDATIONS IN RETAIL WEBSITE IN RESPONSE TO ITEM SELECTIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12579567
METHOD, COMPUTER PROGRAM PRODUCT, AND SYSTEM FOR AUTOMATIC CREATION OF LISTS OF ITEMS ORGANIZED AROUND CO-OCCURRENCES
2y 5m to grant Granted Mar 17, 2026
Patent 12482031
SYSTEM AND METHOD FOR DETERMINING SHOPPING FACILITIES AVAILABLE FOR ORDER PICK UP
2y 5m to grant Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
42%
Grant Probability
68%
With Interview (+25.6%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 169 resolved cases by this examiner. Grant probability derived from career allow rate.
