DETAILED ACTION
This Office action is in response to the amendments filed 11/13/2025.
Claims 1-18 are currently pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 11/13/2025 have been fully considered but they are not persuasive.
With respect to applicant’s arguments on pages 11-13 of remarks filed 11/13/2025 that the claims do not fall in the grouping of certain methods of organizing human activity because the claims recite an augmented reality system which simultaneously presents two augmented reality presentations to two separate mobile devices during a video call where the augmented reality presentations provide an improved GUI, Examiner respectfully disagrees.
The augmented reality system, the computer functionality to simultaneously present augmented reality presentations to devices during a video call, and the assertion that the augmented reality presentations provide an improved graphical user interface are not considered part of the abstract idea. The claim limitations regarding the augmented reality system, initiating a video call between devices, and presenting augmented reality presentations via a display device are considered additional limitations rather than limitations directed towards the abstract idea. Therefore, the aforementioned computer-related and augmented reality-related claim limitations are not directed towards the abstract idea and do not fall within the grouping of certain methods of organizing human activity.
With respect to applicant’s arguments on pages 13-15 of remarks filed 11/13/2025 that the claims are integrated into a practical application because the invention provides a technological solution that enables a user to make selections that are incorporated into personalized augmented reality presentations for people to see and compare available options when people are not physically together and differentiating selections made by users with markers to recognize who made the selection, Examiner respectfully disagrees.
If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. An indication that the claimed invention provides an improvement can include a discussion in the specification that identifies a technical problem and explains the details of an unconventional technical solution expressed in the claim, or identifies technical improvements realized by the claim over the prior art. See MPEP 2106.05(a).
Applicant’s specification recites in paragraph [0027]: “[t]he display device 212, 228 is configured to present user interfaces (e.g., user interfaces associated with the collaborative shopping experience), images (e.g., images of product display units), augmented reality presentations, etc. Accordingly, the display device 212, 228 can [be] of any suitable type, such as a light emitting diode (LED) display device or liquid crystal display (LCD) display device and, in some embodiments, can be integrated with the user input 210, 226 device such as, for example, a touchscreen.” Applicant’s specification recites in paragraph [0015]: “one or both of the people can make selections from the augmented reality presentations.” Applicant’s specification recites in paragraph [0062]: “the discussion of FIG. 5 provides additional detail regarding a system for presenting information to customers via an augmented reality presentation, the discussion of FIG. 6 provides additional detail regarding the operations of such a system.”
The specification thus describes the display device using augmented reality merely as a tool to present information and receive selections via the user interface. It is unclear how presenting personalized data using augmented reality on a user interface, from which a user can make selections and on which indications of selections are visually different, improves the user interface or the underlying technology. The specification fails to provide further detail on how the user interface or technology is improved. Therefore, the claims are not integrated into a practical application.
With respect to applicant’s arguments on pages 15-16 of remarks filed 11/13/2025 that the claims recite a specific manner of displaying items that provides a specific improvement by displaying graphic user elements similar to Example 37, Examiner respectfully disagrees.
A specific way of achieving a result is not a stand-alone consideration in Step 2A Prong Two. However, the specificity of the claim limitations is relevant to the evaluation of several considerations, including the use of a particular machine, a particular transformation, and whether the limitations are mere instructions to apply an exception. See MPEP §§ 2106.05(b), 2106.05(c), 2106.04(d)(I), and 2106.05(f).
Example 37 is directed to relocating icons on a graphical user interface, and its claims are integrated into a practical application because the additional elements recite a specific manner of automatically displaying icons. Specifically, Example 37 was directed to relocating icons by automatically moving them based on a determined amount of icon use, which is not analogous to making selections on a user interface and displaying visually different indications of those selections.
Here, the asserted specific way of achieving a result (e.g., enabling selections and displaying visually different indications of those selections) likewise is not a stand-alone consideration in Step 2A Prong Two.
With respect to applicant’s arguments on page 17 of remarks filed 11/13/2025 that the claims have additional elements, individually and in combination that provide an inventive concept because the claims improve the user interface by enabling user selections on a user interface and displaying visually different indications of user selections and personalized data, Examiner respectfully disagrees.
Although the conclusion of whether a claim is eligible at Step 2B requires that all relevant considerations be evaluated, most of these considerations were already evaluated in Step 2A Prong Two. Thus, in Step 2B, examiners should: carry over their identification of the additional element(s) in the claim from Step 2A Prong Two; and carry over their conclusions from Step 2A Prong Two on the considerations discussed in MPEP §§ 2106.05(a)-(c), (e), (f), and (h), and MPEP § 2106.05(a)(II).
To show that the involvement of a computer assists in improving the technology, the claims must recite the details regarding how a computer aids the method, the extent to which the computer aids the method, or the significance of a computer to the performance of the method. Merely adding generic computer components to perform the method is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology. See MPEP § 2106.05(f).
The display device recited in the claim is merely used as a tool to present a user interface. It is unclear how presenting personalized data using augmented reality on a user interface, where a user can make selections and where indications of the selections are visually different, improves the user interface. The specification fails to provide further detail on how the user interface is improved. Therefore, the additional elements, both individually and in combination, do not provide an inventive concept.
With respect to applicant’s arguments on pages 18-19 of remarks filed 11/13/2025 that Kerger and Li do not teach receiving an indication of a user of the second mobile device retrieving personalized data specific to the user and associated with the selected product in claims 1 and 10 because Kerger describes comments from users specific to the item, Examiner respectfully disagrees.
Applicant’s arguments with respect to the aforementioned claim amendments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
With respect to applicant’s arguments on pages 18-19 of remarks filed 11/13/2025 that Kerger does not teach the augmented reality presentation including a button for presenting the personalized data and Li does not teach augmented reality presentation comprising personalized data, Examiner respectfully disagrees.
Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections.
With respect to applicant’s arguments on page 19 of remarks filed 11/13/2025 that Kerger and Li do not teach personalized data specific to each user, Examiner respectfully disagrees.
Applicant’s arguments with respect to the aforementioned claim amendments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
With respect to applicant’s arguments on pages 19-20 of remarks filed 11/13/2025 that the combination of Kerger and Li does not teach concurrent display of two different augmented reality presentations in claims 9 and 18, Examiner respectfully disagrees.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., concurrent display of two different augmented reality presentations) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Computer Program Per Se
Claims 1-3 and 8-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the claims are drawn to a computer program per se. Products that do not have a physical or tangible form, such as information (often referred to as "data per se") or a computer program per se (often referred to as "software per se"), are not directed to any of the statutory categories when claimed as a product without any structural recitations. See MPEP 2106.03.
System claim 1 recites: “an augmented reality system for collaborative shopping, the augmented reality system comprising: an application configured to be executed by a first mobile device and a second mobile device, the application when executed by the first mobile device causes the first mobile device to perform operations comprising…” System claim 1 is drawn to a computer program per se because the system claim comprises only an “application.” The “first mobile device” and the “second mobile device” are not considered structural recitations because they are not positively recited in the system claim. In addition, dependent claims 2-3 and 8-9 do not add any structural recitations. Therefore, claims 1-3 and 8-9 are drawn to a computer program per se.
Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more.
Under Step 1 of the Subject Matter Eligibility Test, it must be considered whether the claims are directed to one of the four statutory classes of invention. See MPEP § 2106. In the instant case, claims 1-9 are directed to a system (assuming, arguendo, that claims 1-3 and 8-9 are not drawn to a computer program per se) and claims 10-18 are directed to a method, each of which falls within one of the four statutory categories of invention (apparatus/process). Accordingly, the claims will be further analyzed under revised Step 2:
Under Step 2A (Prong One) of the Subject Matter Eligibility Test, it must be considered whether the claims recite a judicial exception. If the claim recites a judicial exception (i.e., an abstract idea), the claim requires further analysis under Prong Two to determine whether the recited judicial exception is integrated into a practical application of that exception. One of the enumerated groupings of abstract ideas is certain methods of organizing human activity, which includes fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). See MPEP § 2106.04(a)(2).
Representative independent claim 1 recites the abstract idea of:
capturing, …, an image of a product display unit, wherein the image of the product display unit includes one or more products; and
transmitting, …, the image of the product display unit; and
receiving, …, the image of the product display unit;
receiving, …, a second user input, wherein the second user input identifies a selected product, and wherein the selected product is one of the one or more products;
receiving an indication of a user …; retrieving personalized data specific to the user… based on the indication of the user …and a product identifier associated with the selected product…;
….wherein the … marker denotes the selected product in the image …, and wherein…, the second user information comprising personalized data associated with the selected product for a user…; and
transmitting, …, an indication of the selected product;
receiving, …, the indication of the selected product;
identifying, based on the indication of the selected product, the selected product from within the image of the product display unit,
receiving an indication of a user …;
retrieving personalized data specific to the user …based on the indication of the user …and the product identifier associated with the selected product;
wherein the first… marker denotes the selected product in the image …, and wherein the first … marker is visually different from the second… marker, and wherein …, the first user information comprising personalized data associated with the selected product for a user ….
This arrangement amounts to certain methods of organizing human activity associated with sales activities and commercial interactions, involving capturing an image of products, transmitting the image, receiving the image, selecting an item from the image, receiving an indication, retrieving personalized data specific to the user, transmitting the selected product, and identifying the selected item from the image. Such concepts have been considered by the Courts to be ineligible certain methods of organizing human activity. See MPEP § 2106.
Step 2A (Prong Two) of the Subject Matter Eligibility Test is the next step in the eligibility analysis and looks at whether the abstract idea is integrated into a practical application. This requires an additional element or combination of additional elements in the claims to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception. See MPEP § 2106.
In this instance, the claims recite additional elements such as:
An augmented reality system for collaborative shopping, the augmented reality system comprising: an application configured to be executed by a first mobile device and a second mobile device, the application when executed by the first mobile device causes the first mobile device to perform operations comprising: initiating a video call, via a communications network, between the first mobile device and the second mobile device; … via an image capture device of the first mobile device…; … via the communications network …; the application when executed by the second mobile device causes the second mobile device to perform operations comprising:…; …via the communications network…;… via a user input device of the second mobile device…; …of the second mobile device… of the second mobile device… of the second mobile device …; generating a second mobile device augmented reality presentation, wherein the second mobile device augmented reality presentation includes the image of the product display unit and a second mobile device marker, wherein the second mobile device marker…of the product display unit, wherein the second mobile device augmented reality presentation comprises a second button associated with the selected product, and wherein selecting the second button enables the personalized data specific to the user of the second mobile device and associated with the selected product to be displayed on the second mobile device augmented reality presentation; presenting, via a second display device associated with the second mobile device, the second mobile device augmented reality presentation;… via the communications network…; wherein the application when executed by the first mobile device causes the first mobile device to perform operations further comprising: … via the communications network…;… of the first mobile device…;… of the first mobile device… of the first mobile device…; and generating a first mobile device augmented reality presentation, wherein the first 
mobile device augmented reality presentation includes the image of the product display unit and a first mobile device marker, wherein the first mobile device marker denotes the selected product in the image of the product display unit, wherein the first mobile device marker is visually different from the second mobile device marker, and wherein the first mobile device augmented reality presentation comprises a first button associated with the selected product, and wherein selecting the first button enables the personalized data specific to the user of the first mobile device and associated with the selected product to be displayed on the first mobile device augmented reality presentation (Claim 1);
… of the first mobile device …the first mobile device,…of the first mobile device, …of the first mobile device, …of the first mobile device, ….of the second mobile device …of the second mobile device, …of the second mobile device,…of the second mobile device, …of the second mobile device (Claims 2 & 11);
the first mobile device marker and the second mobile device marker (Claims 3 & 12);
an image recognition server, wherein the image recognition server is configured to: …, from the first mobile device, …; and …, to the first mobile device, …; wherein the application when executing on the first mobile device causes the first mobile device to perform operations further comprising: …to the second mobile device (Claims 4 & 13);
the image recognition server…machine learning algorithm (Claims 5 & 14);
a call handling server, wherein the call handling server is configured to: initiate the video call between the first mobile device and the second mobile device; and …, between the first mobile device and the second mobile device, images captured by the first mobile device …at one or more of the first mobile device and the second mobile device (Claims 7 and 16);
a video feed captured by the first mobile device (Claims 8 and 17);
wherein the application when executed by the first mobile device further causes the first mobile device to perform operations comprising: presenting, during the video call, via a first display device associated with the first mobile device, the first mobile device augmented reality presentation, wherein the second mobile device augmented reality presentation is concurrently presented via the second display device; transmitting, for presentation on the second mobile device, an indication of the second product; and updating the first mobile device augmented reality presentation to include a third marker denoting the second product, wherein the third marker is different from the first mobile device marker (Claims 9 and 18);
…, by an application executing on a first mobile device, …; initiating, by a communications network, a video call between the first mobile device and a second mobile device; by the application executing on the first mobile device via the communications network, …; by the application executing on the second mobile device via the communications network, ….; …, by the application executing on the second mobile device,… of the second mobile device… of the second mobile device… of the second mobile device …; generating, by the application executing on the second mobile device, a second mobile device augmented reality presentation, wherein the second mobile device augmented reality presentation includes the image of the product display unit and a second mobile device marker, wherein the second mobile device marker denotes the selected product in the image of the product display unit, and wherein the second mobile device augmented reality presentation comprises a second button associated with the selected product, and wherein selecting the second button enables the personalized data specific to the user of the second mobile device and associated with the selected product to be displayed on the second mobile device augmented reality presentation; causing presentation, by the application executing on the second mobile device, of the second mobile device augmented reality presentation;…, by the application executing on the second mobile device via the communications network, …, by the application executing on the first mobile device via the communications network, …, by the application executing on the first mobile device based on the indication of the selected product, …; …of the first mobile device … of the first mobile device … of the first mobile device…; and generating, by the application executing on the first mobile device, a first mobile device augmented reality presentation, wherein the first mobile device augmented reality presentation includes the image of the 
product display unit and a first mobile device marker, wherein the first mobile device marker denotes the selected product in the image of the product display unit, and wherein the first mobile device marker is visually different from the second mobile device marker, and wherein the first mobile device augmented reality presentation [comprises] a first button associated with the selected product, and wherein selecting the first button enables the personalized data specific to the user of the first mobile device and associated with the selected product to be displayed on the first mobile device augmented reality presentation (Claim 10).
However, these elements do not amount to an improvement in the functioning of a computer or any other technology or technical field, apply the judicial exception with, or by use of, a particular machine, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
The independent and dependent claims also fail to recite elements which amount to an improvement in the functioning of a computer or any other technology or technical field, apply the judicial exception with, or by use of, a particular machine, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. For example, the independent and dependent claims are directed to the abstract idea itself and do not amount to an integration under any of the considerations above.
Step 2B is the next step in the eligibility analysis and evaluates whether the claims recite additional elements that amount to an inventive concept (i.e., “significantly more” than the recited judicial exception). According to Office procedure, revised Step 2A overlaps with Step 2B, and thus many of the considerations need not be re-evaluated in Step 2B because the answer will be the same. See MPEP § 2106.
In Step 2A, the following additional elements were identified:
An augmented reality system for collaborative shopping, the augmented reality system comprising: an application configured to be executed by a first mobile device and a second mobile device, the application when executed by the first mobile device causes the first mobile device to perform operations comprising: initiating a video call, via a communications network, between the first mobile device and the second mobile device; … via an image capture device of the first mobile device…; … via the communications network …; the application when executed by the second mobile device causes the second mobile device to perform operations comprising:…; …via the communications network…;… via a user input device of the second mobile device…; …of the second mobile device… of the second mobile device… of the second mobile device …; generating a second mobile device augmented reality presentation, wherein the second mobile device augmented reality presentation includes the image of the product display unit and a second mobile device marker, wherein the second mobile device marker…of the product display unit, wherein the second mobile device augmented reality presentation comprises a second button associated with the selected product, and wherein selecting the second button enables the personalized data specific to the user of the second mobile device and associated with the selected product to be displayed on the second mobile device augmented reality presentation; presenting, via a second display device associated with the second mobile device, the second mobile device augmented reality presentation;… via the communications network…; wherein the application when executed by the first mobile device causes the first mobile device to perform operations further comprising: … via the communications network…;… of the first mobile device…;… of the first mobile device… of the first mobile device…; and generating a first mobile device augmented reality presentation, wherein the first 
mobile device augmented reality presentation includes the image of the product display unit and a first mobile device marker, wherein the first mobile device marker denotes the selected product in the image of the product display unit, wherein the first mobile device marker is visually different from the second mobile device marker, and wherein the first mobile device augmented reality presentation comprises a first button associated with the selected product, and wherein selecting the first button enables the personalized data specific to the user of the first mobile device and associated with the selected product to be displayed on the first mobile device augmented reality presentation (Claim 1);
… of the first mobile device …the first mobile device,…of the first mobile device, …of the first mobile device, …of the first mobile device, ….of the second mobile device …of the second mobile device, …of the second mobile device,…of the second mobile device, …of the second mobile device (Claims 2 & 11);
the first mobile device marker and the second mobile device marker (Claims 3 & 12);
an image recognition server, wherein the image recognition server is configured to: …, from the first mobile device, …; and …, to the first mobile device, …; wherein the application when executing on the first mobile device causes the first mobile device to perform operations further comprising: …to the second mobile device (Claims 4 & 13);
the image recognition server…machine learning algorithm (Claims 5 & 14);
a call handling server, wherein the call handling server is configured to: initiate the video call between the first mobile device and the second mobile device; and …, between the first mobile device and the second mobile device, images captured by the first mobile device …at one or more of the first mobile device and the second mobile device (Claims 7 and 16);
a video feed captured by the first mobile device (Claims 8 and 17);
wherein the application when executed by the first mobile device further causes the first mobile device to perform operations comprising: presenting, during the video call, via a first display device associated with the first mobile device, the first mobile device augmented reality presentation, wherein the second mobile device augmented reality presentation is concurrently presented via the second display device; transmitting, for presentation on the second mobile device, an indication of the second product; and updating the first mobile device augmented reality presentation to include a third marker denoting the second product, wherein the third marker is different from the first mobile device marker (Claims 9 and 18);
…, by an application executing on a first mobile device, …; initiating, by a communications network, a video call between the first mobile device and a second mobile device; by the application executing on the first mobile device via the communications network, …; by the application executing on the second mobile device via the communications network, ….; …, by the application executing on the second mobile device,… of the second mobile device… of the second mobile device… of the second mobile device …; generating, by the application executing on the second mobile device, a second mobile device augmented reality presentation, wherein the second mobile device augmented reality presentation includes the image of the product display unit and a second mobile device marker, wherein the second mobile device marker denotes the selected product in the image of the product display unit, and wherein the second mobile device augmented reality presentation comprises a second button associated with the selected product, and wherein selecting the second button enables the personalized data specific to the user of the second mobile device and associated with the selected product to be displayed on the second mobile device augmented reality presentation; causing presentation, by the application executing on the second mobile device, of the second mobile device augmented reality presentation;…, by the application executing on the second mobile device via the communications network, …, by the application executing on the first mobile device via the communications network, …, by the application executing on the first mobile device based on the indication of the selected product, …; …of the first mobile device … of the first mobile device … of the first mobile device…; and generating, by the application executing on the first mobile device, a first mobile device augmented reality presentation, wherein the first mobile device augmented reality presentation includes the image of the 
product display unit and a first mobile device marker, wherein the first mobile device marker denotes the selected product in the image of the product display unit, and wherein the first mobile device marker is visually different from the second mobile device marker, and wherein the first mobile device augmented reality presentation comprises a first button associated with the selected product, and wherein selecting the first button enables the personalized data specific to the user of the first mobile device and associated with the selected product to be displayed on the first mobile device augmented reality presentation (Claim 10).
These additional limitations, including the limitations in the independent and dependent claims, do not amount to an inventive concept because the recitations above do not amount to an improvement in the functioning of a computer or any other technology or technical field, do not apply the judicial exception with, or by use of, a particular machine, and do not apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is no more than a drafting effort designed to monopolize the exception. In addition, these limitations were already analyzed under Step 2A and did not amount to a practical application of the abstract idea.
For these reasons, the claims are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-11, and 13-18 are rejected under 35 U.S.C. 103 as being unpatentable over Kerger et al. (US Pub. No. 20170249674 A1, hereinafter “Kerger”) and further in view of Carpenter, IV et al. (US Pub. No. 20210111917 A1, hereinafter “Carpenter, IV”).
Regarding claims 1 and 10
Kerger discloses an augmented reality system for collaborative shopping, the augmented reality system comprising: an application configured to be executed by a first mobile device and a second mobile device, the application when executed by the first mobile device causes the first mobile device to perform operations comprising (Kerger, [0032]: augmented reality; [0035]: one or more devices):
capturing, via an image capture device of the first mobile device, an image of a product display unit, wherein the image of the product display unit includes one or more products (Kerger, [0004]: images of items; [0032]: capture images);
and transmitting, via the communications network, the image of the product display unit; and the application when executed by the second mobile device causes the second mobile device to perform operations comprising: receiving, via the communications network, the image of the product display unit (Kerger, FIG. 3, [0040]: user interface shared to interested user with image of items; [0004]: first user shares image of one or more items with second users; [0028]: devices; [0029]: communication network; [0023]: application);
receiving, via a user input device of the second mobile device, a second user input, wherein the second user input identifies a selected product, and wherein the selected product is one of the one or more products; generating a second mobile device augmented reality presentation, wherein the second mobile device augmented reality presentation includes the image of the product display unit and a second mobile device marker, wherein the second mobile device marker denotes the selected product in the image of the product display unit, and wherein the second mobile device augmented reality presentation comprises a second button associated with the selected product, and wherein selecting the second button enables… the selected product to be displayed on the second mobile device augmented reality presentation; presenting, via a second display device associated with the second mobile device, the second mobile device augmented reality presentation (Kerger, [0045]: user clicks on segment of items and display information based on tags of the selected segment in digital image; [0006]: populating the one or more tags associated with the one or more items recognized in the digital image; [0032]: AR images; [0043]: one or more tags that relate to the items depicted in each segment, the details relevant to each item which may be shared by the user; [0046]: the displayed information includes comments about the item(s) depicted in the selected segment; [0038]: the users of the interested user terminals 130 may click on the segment using a mouse and in response to display select information about the items depicted in the selected segment);
and transmitting, via the communications network, an indication of the selected product; wherein the application when executed by the first mobile device causes the first mobile device to perform operations further comprising: receiving, via the communications network, the indication of the selected product (Kerger, [0038]: The potential interested users can then communicate with the sharing user about the specific item(s) in which the interested user has expressed interest and purchase item; [0029]: communication network; [0037]: interested user terminal);
identifying, based on the indication of the selected product, the selected product from within the image of the product display unit; and generating a first mobile device augmented reality presentation, wherein the first mobile device augmented reality presentation includes the image of the product display unit and a first mobile device marker, wherein the first mobile device marker denotes the selected product in the image of the product display unit, and wherein the first mobile device marker is visually different from the second mobile device marker, and wherein the first mobile device augmented reality presentation comprises a first button associated with the selected product, and wherein selecting the first button enables … the selected product to be displayed on the first mobile device augmented reality presentation (Kerger, [0034]: change the digital image into a more meaningful representation that differentiates certain areas within the digital image that correspond to the items (e.g., based on lines, curves, boundaries, etc. that may differentiate one object from another); FIG. 2, [0035]: automatically identify and segment the objects with a plurality differently curved boundaries for each segment surrounding each item in the image; [0027]: in response to selected item being purchased, the selected item becomes unavailable and the item appearance within the image is altered or dimmed with visual indication for the sharing user; FIG. 2, [0039]: the server 150 may determine when certain items have been sold or other activities have resulted in certain items becoming unavailable and alter and display digital image of item for sharing user terminal; [0032]: AR images; [0004]: first user sharing image with second user selecting segments; [0041]: a comments section 330 that includes descriptive details about each item that was initially shared regardless of whether any items have since been sold or otherwise become unavailable (e.g. 
comments about vase being sold includes description of item displayed in strikethrough); [0045]: user clicks on segment of items and display information based on tags of the selected segment in digital image; [0006]: populating the one or more tags associated with the one or more items recognized in the digital image; [0032]: AR images; [0043]: one or more tags that relate to the items depicted in each segment, the details relevant to each item which may be shared by the user; [0046]: the displayed information includes comments about the item(s) depicted in the selected segment; [0038]: the users of the interested user terminals 130 may click on the segment using a mouse and in response to display select information about the items depicted in the selected segment).
Kerger does not teach:
initiating a video call, via a communications network, between the first mobile device and the second mobile device;
receiving an indication of a user of the second mobile device; retrieving personalized data specific to the user of the second mobile device based on the indication of the user of the second mobile device and a product identifier associated with the selected product; …the personalized data specific to the user of the second mobile device and associated with the selected product to be displayed;
receiving an indication of a user of the first mobile device; retrieving personalized data specific to the user of the first mobile device based on the indication of the user of the first mobile device and the product identifier associated with the selected product;…the personalized data specific to the user of the first mobile device and associated with the selected product to be displayed.
However, Carpenter, IV teaches:
initiating a video call, via a communications network, between the first mobile device and the second mobile device (Carpenter, IV, [0034]: video call between users interacting through their devices; [0046]: communication network);
receiving an indication of a user of the second mobile device; retrieving personalized data specific to the user of the second mobile device based on the indication of the user of the second mobile device and a product identifier associated with the selected product; …the personalized data specific to the user of the second mobile device and associated with the selected product to be displayed (Carpenter, IV, [0035]: user transmit request with personal information about the user and accept and transmit information about the user and any other information; [0046]: user inputs their personal information and item of interest; [0040]: send marketing materials based on information input by user in request to user; [0033]: plurality of participant devices;);
receiving an indication of a user of the first mobile device; retrieving personalized data specific to the user of the first mobile device based on the indication of the user of the first mobile device and the product identifier associated with the selected product;… the personalized data specific to the user of the first mobile device and associated with the selected product to be displayed (Carpenter, IV, [0035]: user transmit request with personal information about the user and accept and transmit information about the user and any other information; [0046]: user inputs their personal information and item of interest; [0040]: select and deliver marketing materials based on information input by user in request to user; [0054]: display marketing information on interface; [0033]: plurality of participant devices).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the device communication and augmented reality presentation of Kerger so that the devices communicate via a video call and receive personalized data specific to the user of each device based on a received indication, as taught by Carpenter, IV, because the results of such a modification would be predictable. Specifically, Kerger would continue to teach the device communication and augmented reality presentation, except that the devices now communicate via a video call and receive personalized data specific to the user of each device based on a received indication, according to the teachings of Carpenter, IV, in order to share personal information during a video call. This is a predictable result of the combination. (Carpenter, IV, [0002-0006]).
Regarding claims 2 and 11
The combination of Kerger and Carpenter, IV teaches the augmented reality system of claim 1, wherein the personalized data specific to the user of the first mobile device comprises previous purchase information of the user of the first mobile device, a personalized promotion specific to the user of the first mobile device, inclusion information for wish list of the user of the first mobile device, or suggestion based on the previous purchase information of the user of the first mobile device, and wherein the personalized data specific to the user of the second mobile device comprises previous purchase information associated with the user of the second mobile device, a personalized promotion specific to the user of the second mobile device, inclusion information for wish list of the user of the second mobile device, or suggestion based on the previous purchase information of the second mobile device (Carpenter, IV, [0035]: user transmit request with personal information about the user and accept and transmit information about the user and any other information; [0046]: user inputs their personal information and item of interest; [0040]: send marketing materials based on information input by user in request to user; [0054]: marketing information can display offers to provide products and/or services, discount offers on products and/or services, or any other information known in the art to market products and/or services to people; [0075]: marketing materials can include offers to buy or sell goods and services; [0033]: plurality of participant devices).
The motivation to combine Kerger and Carpenter, IV is the same as set forth above in claim 1.
Regarding claims 4 and 13
The combination of Kerger and Carpenter, IV teaches the augmented reality system of claim 1, further comprising: an image recognition server, wherein the image recognition server is configured to:
receive, from the first mobile device, the image of the product display unit (Kerger, [0035]: sharing user uploads images to server; [0004]: images of items; [0032] capture images);
detect, based on the image of the product display unit, the one or more products included in the image of the product display unit (Kerger, [0036]: scene detection to identify items within image; [0004]: images of items);
determine, within the image of the product display unit, boundaries for each of the one or more products; segment, based on the boundaries for each of the one or more products, the image of the product display unit into sections; associate each of the one or more products with one of the sections (Kerger, [0042]: image segmentation of items based on boundaries to differentiate items within image and identify items in images detected using computer vision; [0043]: the one or more image segments may then be associated with one or more tags that relate to the items depicted in each segment);
and transmit, to the first mobile device, an indication of the associations between the one or more products and the sections; wherein the application when executing on the first mobile device causes the first mobile device to perform operations further comprising: transmitting the indication of the associations between the one or more products and the sections to the second mobile device with the image of the product display unit (Kerger, [0035]: after the image segmentation technology has been applied to the digital image and the one or more objects depicted therein have been suitably identified, the sharing user may review the segmented image and share image of items to be visible to interested user terminals; [0037]: interested user views sharing user’s digital image).
Regarding claims 5 and 14
The combination of Kerger and Carpenter, IV teaches the augmented reality system of claim 4, wherein the image recognition server determines the boundaries for each of the one or more products based on a machine learning algorithm (Kerger, [0042]: image segmentation of items based on boundaries to differentiate items within image and identify items in images detected using computer vision; [0043]: the one or more image segments may then be associated with one or more tags that relate to the items depicted in each segment; [0034]: computer vision implements image segmentation technology using algorithms).
Regarding claims 6 and 15
The combination of Kerger and Carpenter, IV teaches the augmented reality system of claim 4, wherein the indication of the selected product includes an indication of one of the sections (Kerger, [0037]: select certain segments of shared digital image; [0038]: clicking on segment and identify selection; [0004]: increase focus on the item(s) depicted in the selected segment).
Regarding claims 7 and 16
The combination of Kerger and Carpenter, IV teaches the augmented reality system of claim 1, further comprising: and transmit, between the first mobile device and the second mobile device, images captured by the first mobile device and selections received via user input at one or more of the first mobile device and the second mobile device (Kerger, [0038]: The potential interested users can then communicate with the sharing user about the specific item(s) in which the interested user has expressed interest and purchase item; [0029]: communication network; [0037]: interested user terminal; FIG. 3, [0040]: user interface shared to interested user with image of items; [0004]: first user shares image of one or more items with second users; [0028]: devices; [0023]: application; [0032]: video images).
Kerger does not teach:
a call handling server, wherein the call handling server is configured to: initiate the video call between the first mobile device and the second mobile device;
However, Carpenter, IV teaches:
a call handling server, wherein the call handling server is configured to: initiate the video call between the first mobile device and the second mobile device (Carpenter, IV, [0034]: video calling between devices; [0041]: video calling via central computer; [0052]: initiating two-way video communication with both the participant and the character).
The motivation to combine Kerger and Carpenter, IV is the same as set forth above in claim 1.
Regarding claims 8 and 17
The combination of Kerger and Carpenter, IV teaches the augmented reality system of claim 1, wherein the image of the product display unit is part of a video feed captured by the first mobile device (Kerger, FIG. 3, [0040]: user interface shared to interested user with image of items; [0004]: first user shares image of one or more items with second users; [0028]: devices; [0023]: application; [0032]: video images that are live).
Regarding claims 9 and 18
The combination of Kerger and Carpenter, IV teaches the augmented reality system of claim 1, wherein the application when executed by the first mobile device further causes the first mobile device to perform operations comprising:
presenting,…, via a first display device associated with the first mobile device, the first mobile device augmented reality presentation, wherein the second mobile device augmented reality presentation is concurrently presented via the second display device (Kerger, [0045]: user clicks on segment of items and display information based on tags of the selected segment in digital image; [0006]: populating the one or more tags associated with the one or more items recognized in the digital image; [0024]: simultaneously displayed images; [0032]: AR images);
receiving a first user input selecting a second product from the one or more products; transmitting, for presentation on the second mobile device, an indication of the second product; and updating the first mobile device augmented reality presentation to include a third marker denoting the second product, wherein the third marker is different from the first mobile device marker (Kerger, [0038]: The potential interested users can then communicate with the sharing user about the specific item(s) in which the interested user has expressed interest and purchase item; FIG. 2, [0035]: automatically identify the objects depicted in the segments; [0027]: in response to selected item being purchased, the selected item becomes unavailable and the item appearance within the image is altered or dimmed with visual indication for the sharing user; FIG. 2, [0039]: the server 150 may determine when certain items have been sold or other activities have resulted in certain items becoming unavailable and alter and display digital image of item for sharing user terminal; [0032]: AR images; [0034]: change the digital image into a more meaningful representation that differentiates certain areas within the digital image that correspond to the items (e.g., based on lines, curves, boundaries, etc. that may differentiate one object from another)).
Kerger does not teach:
… during the video call…
However, Carpenter, IV teaches:
… during the video call…(Carpenter, IV, [0034]: video calling between devices; [0041]: video calling via central computer; [0052]: initiating two-way video communication with both the participant and the character).
The motivation to combine Kerger and Carpenter, IV is the same as set forth above in claim 1.
Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kerger and Carpenter, IV as applied to claim 1 above, and further in view of Li et al. (US Pub. No. 2018/0376104 A1, hereinafter “Li”).
Regarding claims 3 and 12
The combination of Kerger and Carpenter, IV teaches the augmented reality system of claim 1, but does not teach
wherein the first mobile device marker and the second mobile device marker are selected from a list comprising bordering, shading, highlighting, or color-coding.
However, Li teaches:
wherein the first mobile device marker and the second mobile device marker are selected from a list comprising bordering, shading, highlighting, or color-coding (Li, [0079]: presenting and selecting options for annotation (e.g. color); [0057]: designate annotation color; [0091]: user annotates by inputting a circle in a certain area of the image in a certain color; [0050]: simultaneous online annotating by multiple users).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the marker of Kerger and Carpenter, IV with selecting the marker from a list as taught by Li because the results of such a modification would be predictable. Specifically, Kerger and Carpenter, IV would continue to teach markers except that now the marker is a color according to the teachings of Li in order to annotate using a designated color. This is a predictable result of the combination. (Li, [0057]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is cited as follows: Alapati et al. (US Patent No. 10223755 B2), related to providing in-store shopping experiences; High et al. (US Patent No. 10115139 B2), related to generating shopping lists for a consumer during collaborative shopping; and the non-patent literature "Extended Abstract: CoShopper - Leveraging Artificial Intelligence for an Enhanced Augmented Reality Grocery Shopping Experience," related to combining Artificial Intelligence (AI) methods with Augmented Reality (AR) techniques to enhance the grocery shopping experience through the use of smart glasses.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LATASHA DEVI RAMPHAL whose telephone number is (571)272-2644. The examiner can normally be reached 11 AM - 7:30 PM (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jeffrey A. Smith, can be reached at (571)272-6763. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LATASHA D RAMPHAL/Examiner, Art Unit 3688 /Jeffrey A. Smith/Supervisory Patent Examiner, Art Unit 3688