DETAILED ACTION
Status of Claims
The following is a final office action in response to the communication filed 8/18/2025.
Claims 1, 8 and 15 have been amended.
Claims 1-20 are currently pending and have been examined.
Information Disclosure Statement
The Information Disclosure Statement received 6/17/2025 has been reviewed and considered.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s amendments and associated arguments, filed 8/18/25, with respect to the rejection of claims 15-20 under 35 U.S.C. §101 have been considered but they are not persuasive. Applicant argues that the specification disclaims transitory signals. Examiner respectfully disagrees. Per paragraph 0042 of the specification, a “computer readable storage medium” is “not to be construed as a storage in the form of transitory signals per se.” However, the claim recites “a computer program product,” not a “computer readable storage medium.” The specification does not directly establish that the recited computer program product is not to be construed as a storage in the form of transitory signals per se. Examiner suggests the claims be amended to recite a “computer readable storage medium.”
Applicant’s amendments and associated arguments, filed 8/18/25, with respect to the rejection of claims 1-20 under 35 U.S.C. §101 have been considered but they are not persuasive.
Applicant argues that the operations set forth in the claims cannot, as a practical matter, be performed entirely within the human's mind, even if aided by pen and paper. For example, Applicant respectfully submits that the human mind is incapable of
"generating a virtual model for the vendor for the item, wherein the virtual model includes the image and the fulfillment information for the item within an augmented reality display." More specifically, Applicant respectfully submits that the human mind is incapable of including an image and the fulfillment information for the item within an augmented reality. Examiner initial notes that, under broadest reasonable interpretation, the model itself is merely an image presented in conjunction with fulfillment information, which a human could generate using pencil and paper. The characterization of the model as being “virtual” and the further recitation that the presentation of information is “within an augmented reality display” are not identified as abstract. These two features are identified as additional elements and evaluated in step 2A prong 2 (see additional analysis regarding these additional elements in the response to the argument below).
Applicant further argues that the claim features are integrated into a practical application. For example, Applicant argues that the claim features are integrated into the practical application of an e-commerce system that correlates consumer items with commercial vendors. Applicant further argues that claim 1 integrates its features into a practical application of an item and vendor correlation system at least by providing improvements to existing technology. Examiner respectfully disagrees. First, examiner notes that showrooming is an abstract problem, rather than a technical problem rooted in technology. Second, correlating consumer items with commercial vendors is also abstract and can be performed in the human mind without the aid of technology, and claiming the improved speed or efficiency inherent in applying the abstract idea (the aforementioned correlation) on a computer does not provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). See MPEP 2106.05(f). Furthermore, the characterization of the model as “virtual” amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, and the display on an “augmented reality display” merely represents post-solution activity. Examiner suggests Applicant amend the claims to characterize the construct of the augmented reality display and the relationship between the image and the fulfillment information (e.g., using overlays, specifically identified regions of the image in which to present information, interactive elements associated with the fulfillment information, etc.), which may lend toward subject matter eligibility.
Therefore, the rejections of the claims under 35 U.S.C. 101 have been maintained.
Applicant’s amendments and associated arguments, filed 8/18/25, with respect to the rejections under 35 U.S.C. §102 have been considered but are moot because the arguments do not apply to all of the references being used in the current rejection.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claims 15-20 are directed to a computer program product comprising one or more computer-readable storage media. The specification indicates that “a computer program product embodiment ("CPP embodiment" or “CPP”) is a term used in the present disclosure to describe any set of one, or more, storage media (also called "mediums") collectively included in a set of one, or more, storage devices that collectively include machine readable code corresponding to instructions and/or data for performing computer operations specified in a given CPP claim” and therefore, in view of the ordinary and customary meaning of computer readable media and in accordance with the broadest reasonable interpretation of the claim, said medium could be directed towards a transitory propagating signal per se and considered to be non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. 101, Aug 24, 2009, p. 2.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.
Step 1 of the Subject Matter Eligibility Test entails considering whether the claimed subject matter falls within the four statutory categories of patentable subject matter identified by 35 U.S.C. 101: Process, machine, manufacture, or composition of matter.
Claims 1-7 are directed to a method (process) and claims 8-14 are directed to a system (machine or manufacture). As such, these claims are directed to statutory categories of invention.
Claims 15-20 recite a computer program product, which as indicated above is not a statutory category of invention. However, claims 15-20 will be evaluated in conjunction with claims 1-14 in the interest of compact prosecution.
If the claim recites a statutory category of invention, the claim requires further analysis in Step 2A. Step 2A of the Subject Matter Eligibility Test is a two-prong inquiry. In Prong One, examiners evaluate whether the claim recites a judicial exception.
Claim 1 recites abstract limitations, including those identified in bold below:
1. A computer-implemented method for correlating consumer items and commercial vendors, the computer-implemented method comprising:
capturing an image of an item and a surrounding area;
identifying the item in the image and determining a vendor for the item by recognizing the surrounding area in the image and associating the surrounding area with the vendor for the item;
obtaining fulfillment information for the item from a vendor database; and
generating a virtual model for the vendor for the item, wherein the virtual model includes the image and the fulfillment information for the item within an augmented reality display.
These limitations, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind, or by a human using pen and paper, and therefore falls within the mental processes grouping of abstract ideas. More specifically, other than reciting that the method is computer-implemented, nothing in the claim precludes the aforementioned (bolded) steps from practically being performed in the human mind or by a human using pen and paper. The mere recitation of a generic computer does not take the claim out of the mental process grouping. Thus, the claim recites an abstract idea.
Claims 8 and 15 recite analogous abstract limitations and therefore recite an abstract idea for the same reasons as those presented above with respect to claim 1.
If the claim recites a judicial exception in Step 2A Prong One, the claim requires further analysis in Step 2A Prong Two. In Step 2A Prong Two, examiners evaluate whether the claim recites additional elements that integrate the exception into a practical application of that exception.
The claim recites the additional elements that are underlined below:
1. A computer-implemented method for correlating consumer items and commercial vendors, the computer-implemented method comprising:
capturing an image of an item and a surrounding area;
identifying the item in the image and determining a vendor for the item by recognizing the surrounding area in the image and associating the surrounding area with the vendor for the item;
obtaining fulfillment information for the item from a vendor database; and
generating a virtual model for the vendor for the item, wherein the virtual model includes the image and the fulfillment information for the item within an augmented reality display.
8. A computer system for correlating consumer items and commercial vendors, the computer system comprising: one or more processors, one or more computer-readable memories, and one or more computer-readable storage media; program instructions, stored on at least one of the one or more computer-readable storage media for execution by at least one of the one or more processors via at least one of the one or more computer-readable memories…
15. A computer program product for correlating consumer items and commercial vendors, the computer program product comprising: one or more computer-readable storage media; program instructions, stored on at least one of the one or more computer-readable storage media …
The implementation by the computer and the recited elements of a computer system (processors, memories, etc.) are recited at a high level of generality and, as applied, serve as a tool used in its ordinary capacity to perform the abstract idea; they therefore amount to mere instructions to “apply it.”
The characterization of the model as virtual amounts to merely indicating a field of use or technological environment in which to apply a judicial exception and cannot integrate the judicial exception into a practical application (see MPEP 2106.05(h)).
The capture of an image is recited at a high level of generality (i.e. as a general means of gathering information for use in the following steps), and amounts to mere data gathering, which is a form of insignificant extra-solution activity. Examiner notes that no mechanism for capturing said image is recited in the claim(s).
The display of the model within an augmented reality display merely amounts to post-solution activity when recited at this level of breadth. Examiner notes that there are no limitations directed to the particulars of the “augmented reality” features of the display and as such the display of information itself is generically recited.
Accordingly, in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
If the additional elements do not integrate the exception into a practical application in step 2A Prong Two, then the claim is directed to the recited judicial exception, and requires further analysis under Step 2B to determine whether they provide an inventive concept (i.e., whether the additional elements amount to significantly more than the exception itself).
As mentioned above, the implementation by the computer and the recited elements of a computer system (processors, memories, etc.) are recited at a high level of generality and, as applied, serve as a tool used in its ordinary capacity to perform the abstract idea; they therefore amount to mere instructions to “apply it.” Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea, does not provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit).
As mentioned above, the characterization of the model as virtual amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (see MPEP 2106.05(h)).
As indicated above, the capture of an image amounts to insignificant extra-solution activity. The specification of the application, for example in paragraph 0033, indicates that the additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317, 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art."). MPEP 2106.05(d)(I)(2).
As indicated above, the display of the model within an augmented reality display merely amounts to post-solution activity. MPEP 2106.05(d)(II) and the cases cited therein, including Trading Techs. Int’l v. IBG LLC, 921 F.3d 1084, 1093 (Fed. Cir. 2019), and Intellectual Ventures I LLC v. Erie Indemnity Co., 850 F.3d 1315, 1331 (Fed. Cir. 2017), indicate that the mere display of data is a well-understood, routine, and conventional function. In addition, see Arrasvuori (20080071559 A1), which in paragraph 0073 discloses the generic nature of augmented reality data being presented on a generic display.
Thus, even when viewed as an ordered combination, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea.
Dependent claims 2-6, 9-13 and 16-20 recite abstract limitations that further characterize the aforementioned abstract idea (receiving requests/location information, modifying requests, associating information, making predictions, monitoring interactions, updating requests, obtaining profiles, displaying information, etc.) and additional elements including (i) receiving and transmitting functions (i.e., acquiring information from a device, transmitting information to a server), (ii) the application of a machine learning model, and (iii) the storage of information. With respect to element (i), the Symantec, TLI, OIP Techs. and buySAFE court decisions cited in MPEP 2106.05(d)(II) indicate that mere collection or receipt of data over a network is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is here). With respect to element (ii), the machine learning model is recited at a high level of generality and thus amounts to mere instructions to apply an exception using a generic computer component. With respect to element (iii), use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea, does not provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5-10, 12-17 and 19-20 are rejected under 35 U.S.C. 102(a)(1) and 35 U.S.C. 102(a)(2) as being anticipated by Calman et al. (US 20120229657).
With respect to claim 1, Calman teaches:
A computer-implemented method for correlating consumer items and commercial vendors (Calman [0024] Referring now to FIG. 1, a general process flow 100 is provided for providing relationship information between one or more individuals associated with a user and information associated with products, locations, businesses, etc., identified in a captured image.), the computer-implemented method comprising:
capturing an image of an item and a surrounding area (Calman Fig. 1, [0024] as represented by block 110, the apparatus is configured to receive information associated with an image. In some embodiments, the image is captured by a mobile device operated by a user; Calman Fig. 3 [0088] Mobile device field of view 340 includes the field of view of mobile device 312. In some embodiments, mobile device field of view 340 may be the field of view of the mobile device's digital camera functionality and in other embodiments, mobile device field of view 340 may be the field of view of the mobile device's digital video recorder functionality. In this embodiment, Object 342, Object 344 and Object 346 are located in mobile device field of view 340. As explained earlier, an object may be a product, business, location, etc. In some embodiments, mobile device user 310 uses mobile device 312 and AR Apparatus 330 to determine if objects 342-346 are related to one or more identified individuals. However, in other embodiments, objects 342-346 may be any object (e.g., product, location, business, etc.) that exists within mobile device field of view 340. Mobile device field of view 340 is not limited to this embodiment including objects 342-346. Mobile device field of view 340 may include an infinite number of objects, where each object may take on an infinite number of forms; Calman [0118] Mobile device 312 includes user input device 440A and display 434. As shown in FIG. 7, display 434 is displaying an image of mobile device field of view 340. In this embodiment of the invention, mobile device 312 is capturing a real-time video stream of mobile device field of view 340..);
identifying the item in the image and determining a vendor for the item by recognizing the surrounding area in the image and associating the surrounding area with the vendor for the item (Calman [0028] Further regarding block 110, the phrase "information associated with an image" could be any information associated with an image. In some embodiments, the image itself may be information associated with an image. In some embodiments, metadata, which could be decoded into the image or stored elsewhere, could be information associated with an image. In some embodiments, information associated with an image could be the results of any analysis of the image (e.g., image comparison analysis, pattern recognition analysis or image recognition analysis). In another embodiment of the invention, information associated with an image could be the output of any modeling or composite imaging processes that are based in part on the image. In yet another embodiment, information associated with an image could be information concerning the location of an object depicted in an image; Calman [0032] Regarding block 110, after receiving the image, in some embodiments, the apparatus may also identify a product, business, location, etc. (hereinafter referred to as an "object") that is depicted in the image. The phrase "depicts an object" refers to an image that provides any type of representation of an object. For example, in some embodiments, an image may depict an object if the object is visible anywhere in the image. The object may be in the foreground of the image or the object may be in the background of the image. Furthermore, the entire object may be visible in the image or only a portion of the object may be visible in the image; Calman [0033] In some embodiments of the invention where the object is a product, the identifying data may include: the size, shape or color of a product's packaging; a product's logo; the bar code information associated with a product; the ratio of the size of one feature of a product or its packaging to another feature; a product's physical location; or the appearance of a product itself (as opposed to its packaging); Calman [0034] For a business, the identifying data may be any type of data that would identify a business. In some embodiments, the identifying data may include: the logo of the business, the decor of the business, one or more trademarks associated with the business, etc. For a location, the identifying data is any type of data that would identify a location, e.g., a street sign, a famous landmark, a street number that is visible on a local business or a home, a positioning system device signal that is communicated from the mobile device (e.g., global positioning system (GPS) signal), etc.; see also [0036]-[0038]);
obtaining fulfillment information for the item from a vendor database (Calman [0078] In some embodiments, the mobile device and/or the server access one or more databases or datastores (not shown) to search for and/or retrieve information related to the object and/or marker; Calman [0089] In various embodiments, information associated with or related to one or more objects that is retrieved for presentation to a user via the mobile device may be permanently or semi-permanently associated with the object. In other words, the object may be "tagged" with the information. In some embodiments, a location pointer is associated with an object after information is retrieved regarding the object. In this regard, subsequent mobile devices capturing the object for recognition may retrieve the associated information, tags and/or pointers in order to more quickly retrieve information regarding the object… In some embodiments, the information gathered through the recognition and information retrieval process may be posted by the user in association with the object. Such tags and/or postings may be stored in a predetermined memory and/or database for ease of searching and retrieval); and
generating a virtual model for the vendor for the item, wherein the virtual model includes the image and the fulfillment information for the item within an augmented reality display (Calman [0024] In addition, as represented in block 140, the apparatus is configured to present, via the mobile device of the user, information associated with the one or more relationships. An apparatus may be able to execute each of the four blocks of FIG. 1 dynamically in real-time. In some embodiments, "real-time" means instantly or immediately upon a user capturing the image or a user executing an AR function, while in other embodiments "real-time" may mean a short delay of a few seconds (e.g., thirty seconds) or a few minutes (e.g., two minutes); Calman [0052] An indicator as described herein may be identified by an object recognition application or an AR application. An indicator may be any type of indicator that is a distinguishing feature that can be interpreted by the object recognition application or the AR application to identify objects or relationships; [0053] In some embodiments, the indicator is "selectable" and a user may "select" the indicator and retrieve information related to the object or the individual related to the object, including how the individual is related to the object. The information may include any desired information concerning the object or the individual and may range from basic information to greatly detailed information. Alternatively, all or a portion of the indicator may include a hyperlink; [0055] In some embodiments, the indicator is not interactive and simply provides information to the user by displaying information on the display of a mobile device. For example, in some embodiments, the indicator may merely identify an object, just identify the object's name/title, or present brief information about the object, etc., rather than provide extensive detail that requires interaction with the indicator; Calman [0089] In various embodiments, information associated with or related to one or more objects that is retrieved for presentation to a user via the mobile device may be permanently or semi-permanently associated with the object [see also [0093], disclosing that the AR application may be local or remote to the image capture device; Figs. 5-8 for characterization of the augmented reality display]).
With respect to claim 2, Calman teaches the limitations of claim 1 and further discloses:
receiving a request for the item from a user based on the virtual model for the vendor for the item; modifying the request for the item by adding an indication of the vendor; and transmitting a modified request for the item to a server (Calman [0080] In some embodiments, the indicator may provide an Internet hyperlink to enable the user to obtain further information about the product or enable the user to purchase the product by directing the user to a website where product is offered for sale; Calman [0081] in some embodiments, the user then may purchase the product from the website of the store where the user is currently located, pay for the product online, and then take the product from the store.; Calman [0113] In this way, the intelligent software may automatically order the product after the user indicates he would like to purchase the product using the AR application. The product is put into a purchasing queue and the intelligent software agent provides for the remaining transaction requirements.).
With respect to claim 3, Calman teaches the limitations of claim 1 and further discloses:
obtaining location data for the surrounding area from a device; and associating the surrounding area with the vendor for the item when the location data matches a vendor location (Calman [0120] FIG. 7 also illustrates an indicator 520. Here, a system configured to perform the process described in FIGS. 1 and 2 automatically determines that the object depicts an identified location. In some embodiments, the system may makes this location determination by identifying from the image a street sign, or a famous landmark, or a street number that is visible on a local business or a home, etc. In other embodiments, a system makes this location determination using one or more other location determining mechanisms. For example, the system may make this location determination by communicating with a global positioning system (GPS) satellite that receives a signal from a positioning system device located in the mobile device 312. Also as explained with respect to FIG. 4 above, in alternate embodiments, a system makes this location determination by determining a network address associated with the mobile device 312 or by identifying a location of a cell site (or cell tower) that is located closest to the mobile device 312. In some embodiments, one or more of the above location determining mechanisms may be used in combination with each other in order to confirm the accuracy of the location identified by one of the location determining mechanisms; Calman [0089] In various embodiments, information associated with or related to one or more objects that is retrieved for presentation to a user via the mobile device may be permanently or semi-permanently associated with the object. In other words, the object may be "tagged" with the information. In some embodiments, a location pointer is associated with an object after information is retrieved regarding the object. In this regard, subsequent mobile devices capturing the object for recognition may retrieve the associated information, tags and/or pointers in order to more quickly retrieve information regarding the object…Such tags and/or postings may be stored in a predetermined memory and/or database for ease of searching and retrieval).
With respect to claim 5, Calman teaches the limitations of claim 2 and further discloses: displaying the request for the item to the vendor for the item; monitoring interactions of the vendor for the item with the request for the item; and updating the request for the item based on the interactions (Calman [0044] In some embodiments, the apparatus may receive information about individuals associated with the user from a merchant (e.g., an electronics company) that manages a database including transaction history (including purchases and returns) associated with one or more individuals. Therefore, a merchant may have a system that automatically stores digital copies of receipts of transactions executed by one or more individual. An individual may have a choice to opt-in or opt-out of sharing such transaction data with the merchant; Calman [0045] In some embodiments, apparatus may receive data through an API. In this way, the data may be stored in a separate API and be implemented by request from the mobile device and/or server accesses another application by way of an API.; Calman [0047] Regarding block 130, the apparatus having the process flow 100 may use any means to determine that an individual determined at block 120 is related to an object identified at block 110. As used herein, the phrase "related to" may mean that an individual has engaged in a transaction associated with the object (e.g., purchase or return of a product), has written something about the object (e.g., blog post or message on a social network about a product, business, location, etc.), has previously visited the object (e.g., a location or a business), or the like. [see also Calman [0080] In some embodiments, the indicator may provide an Internet hyperlink to enable the user to obtain further information about the product or enable the user to purchase the product by directing the user to a website where product is offered for sale; Calman [0081] in some embodiments, the user then may purchase the product from the website of the store where the user is currently located, pay for the product online, and then take the product from the store)).
With respect to claim 6, Calman teaches the limitations of claim 1 and further discloses:
storing the virtual model for the vendor for the item in the vendor database (Calman [0039] In yet another embodiment of block 110, the information associated with the image may match one or more pieces of identifying data, such that the apparatus determines that the image depicts more than one object. In such instances, the user may be presented with multiple candidate identifications and may opt to choose the appropriate identification or input a different identification… Upon input by the user identifying the object, the apparatus may "learn" from the input and store additional identifying data in order to avoid multiple identification candidates for the same object in future identifications; Calman [0089] In various embodiments, information associated with or related to one or more objects that is retrieved for presentation to a user via the mobile device may be permanently or semi-permanently associated with the object. In other words, the object may be "tagged" with the information. In some embodiments, a location pointer is associated with an object after information is retrieved regarding the object. In this regard, subsequent mobile devices capturing the object for recognition may retrieve the associated information, tags and/or pointers in order to more quickly retrieve information regarding the object… In some embodiments, the information gathered through the recognition and information retrieval process may be posted by the user in association with the object. Such tags and/or postings may be stored in a predetermined memory and/or database for ease of searching and retrieval).
With respect to claim 7, Calman teaches the limitations of claim 2 and further discloses:
obtaining a profile of the user and modifying the fulfillment information for the item based on the profile of the user (Calman [0080] In some embodiments, the indicator may provide an Internet hyperlink to enable the user to obtain further information about the product or enable the user to purchase the product by directing the user to a website where product is offered for sale; Calman [0081] in some embodiments, the user then may purchase the product from the website of the store where the user is currently located, pay for the product online, and then take the product from the store.; Calman [0113] In this way, the intelligent software may automatically order the product after the user indicates he would like to purchase the product using the AR application. The product is put into a purchasing queue and the intelligent software agent provides for the remaining transaction requirements.).
With respect to claims 8-10 and 12-14, the limitations are analogous to the limitations of claims 1-3 and 5-7, and are thereby rejected for the same reasons identified above with respect to claims 1-3 and 5-7. In addition, Calman further teaches: A computer system for correlating consumer items and commercial vendors, the computer system comprising: one or more processors, one or more computer-readable memories, and one or more computer-readable storage media; program instructions, stored on at least one of the one or more computer-readable storage media for execution by at least one of the one or more processors via at least one of the one or more computer-readable memories… [to execute the steps identified above with respect to the method claims] (see Fig. 3, [0082]-[0089]; Fig. 4, [0090]-[0103]).
With respect to claims 15-17 and 19-20, the limitations are analogous to the limitations of claims 1-3 and 5-7, and are thereby rejected for the same reasons identified above with respect to claims 1-3 and 5-7. In addition, Calman further teaches: A computer program product for correlating consumer items and commercial vendors, the computer program product comprising: one or more computer-readable storage media; program instructions, stored on at least one of the one or more computer-readable storage media… [to execute the steps identified above with respect to the method claims] (see Fig. 3, [0082]-[0089]; Fig. 4, [0090]-[0103]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 4, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Calman et al. (US 20120229657), as applied above, and further in view of Systrom (20140279039 A1).
With respect to claim 4, Calman teaches the limitations of claim 3 and further discloses:
wherein a machine learning model that [identifies] the vendor based on prior transactions and the location data is used to determine the vendor for the item
(Calman [0047] Regarding block 130, the apparatus having the process flow 100 may use any means to determine that an individual determined at block 120 is related to an object identified at block 110. As used herein, the phrase "related to" may mean that an individual has engaged in a transaction associated with the object (e.g., purchase or return of a product), has written something about the object (e.g., blog post or message on a social network about a product, business, location, etc.), has previously visited the object (e.g., a location or a business), or the like. The phrase "related to" should be afforded the broadest interpretation possible and may capture other embodiments not described herein. In some embodiments, the apparatus may utilize one or more methods, such as pattern recognition algorithms, to analyze first information associated with an image, analyze second information associated with one or more individuals, and compare the first information with the second information. If the first information associated with the image matches second information associated with one or more individuals, either exactly or with a certain degree of confidence, then the apparatus determines that a relationship exists between an object and an individual; Calman [0048] In some embodiments, the apparatus may use comparison or pattern recognition algorithms such as decision trees, logistic regression, Bayes classifiers, support vector machines, kernel estimation, perceptrons, clustering algorithms, regression algorithms, categorical sequence labeling algorithms, real-valued sequence labeling algorithms, parsing algorithms, general algorithms for predicting arbitrarily-structured labels such as Bayesian networks and Markov random fields, ensemble learning algorithms such as bootstrap aggregating, boosting, ensemble averaging, combinations thereof, and the like to determine that an individual determined at block 120 is related to an object identified at block 110; Calman [0050] Upon input by the user identifying the relationship, the apparatus may "learn" from the input and store additional data in order to avoid multiple identification candidates (i.e., relationships) for the same object in future identifications, or present the multiple identification candidates in an order that suits the user's interests (e.g., list transaction relationships higher on the list and messaging relationships lower on the list); Calman [0120] FIG. 7 also illustrates an indicator 520. Here, a system configured to perform the process described in FIGS. 1 and 2 automatically determines that the object depicts an identified location. In some embodiments, the system may makes this location determination by identifying from the image a street sign, or a famous landmark, or a street number that is visible on a local business or a home, etc. In other embodiments, a system makes this location determination using one or more other location determining mechanisms. For example, the system may make this location determination by communicating with a global positioning system (GPS) satellite that receives a signal from a positioning system device located in the mobile device 312.)
Calman establishes the identification of vendors associated with user purchases based on user location, which strongly suggests making a “prediction” using a learning-based model. Systrom more clearly establishes that transaction and location data can be used to predict the vendor (Systrom [0033] Block S120 can also implement supervised or semi-supervised machine learning techniques, such as by augmenting a database of images with automatically-generated tags confirmed by users or images with manually-entered tags in order to improve the object image detection algorithm; [0038] disclosing selecting the merchant from a set of suitable online or brick-and-mortar stores based on … a user's transaction history (e.g., has the user previously shopped with this merchant?), … a user location (e.g., GPS location data of a smartphone associated with the user and that is physically nearby a particular brick-and-mortar merchant), etc.).
One of ordinary skill in the art would have recognized that applying the data analysis of Systrom to the teachings of Calman would have yielded predictable results, because the level of ordinary skill in the art demonstrated by the applied references shows the ability to incorporate analysis that associates locations, transactions and vendors into the conveyance of information to users through augmented reality interfaces, which improves the interface between buyers and sellers through improved, targeted content generation.
With respect to claim 11, the limitations are analogous to the limitations of claim 4, and are thereby rejected for the same reasons identified above with respect to claim 4.
With respect to claim 18, the limitations are analogous to the limitations of claim 4, and are thereby rejected for the same reasons identified above with respect to claim 4.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Signorelli et al. (20140214547 A1), disclosing systems and methods for augmented retail reality.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to James Trammell whose telephone number is 571-272-6712. The examiner can normally be reached Monday - Friday 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Trammell can be reached at (571) 272-6712. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABBY J FLYNN/ Supervisory Patent Examiner, Art Unit 3663