DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
This action is in response to the claims filed on October 31, 2025.
Claims 1-20 are pending.
Claims 1, 2, 4, 6, 8-13, 15, 17 and 19-20 have been amended.
Response to Arguments
Applicant’s arguments, see page 8, filed October 31, 2025, with respect to the objection to claim 10 and the 35 U.S.C. § 112 rejection of claim 11 have been fully considered and are persuasive. The objection to claim 10 and the 35 U.S.C. § 112 rejection of claim 11 have been withdrawn.
Applicant's arguments (reproduced below) filed October 31, 2025 regarding the 35 U.S.C. § 101 rejection of claims 1-20 (see pages 9-12) have been fully considered but are not persuasive.
The Office rejects claims 1-20 under 35 U.S.C. § 101, arguing that the claims are directed to non-statutory subject matter. More specifically, the Office alleges that claims 1-20 recite methods of organizing human activity and do not include additional elements that integrate the abstract idea into a practical application. Applicant respectfully disagrees.
Pursuant to prong two of step 2A, a claim is still patent eligible if the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d).
One way to determine integration into a practical application is when the claimed invention improves the functioning of a computer or improves another technology or technical field. To evaluate an improvement to a computer or technical field, the specification must set forth an improvement in technology and the claim itself must reflect the disclosed improvement. See MPEP 2106.04(d)(1) and 2106.05(a). The present disclosure provides a technical improvement in computer-implemented object recognition and navigation systems. Existing retail systems "typically lack a streamlined mechanism for customers to quickly identify items they have seen in online posts and then locate these promoted products within a physical store. The gap between online discovery and offline purchasing leads to a suboptimal consumer experience and potentially reduces sales opportunities for retailers." Applicant's specification, at [0010]. The amended claims address the problem by introducing a two-level machine-learning feature extraction and similarity analysis pipeline that enables real-time mapping between user-provided digital media and in-store inventory. Specifically, the amended claim 1 recites "identifying objects depicted within the digital media, comprising: extracting visual feature vectors and functional feature vectors from the digital media, and inputting the extracted visual and functional feature vectors into one or more trained neural networks (NNs) to output one or more product identifiers corresponding to the objects." The amended claim 1 further recites identifying the visually similar or functionally similar items available at the physical location "based on the visual and functional vector feature vectors extracted from the digital media and corresponding visual and functional vector feature vectors extracted from the collected information," and "generating a guidance that navigates to at least one of the set of target items within the physical location."
These disclosed and claimed techniques introduce a specific technological solution to a real-world object recognition and data mapping issue. The claims recite an improved computer-based system for multimodal object recognition, similarity computation, and spatial navigation, implemented via trained neural networks and feature vector analysis.
The August Memorandum provides additional clarity for prong two of step 2A. As the August Memorandum explains, prong two "considers the claim as a whole," and "[t]he way in which the additional elements use or interact with the exception may integrate the judicial exception into a practical application." Pages 3-4 (emphasis in original). Accordingly, "the additional limitations should not be evaluated in a vacuum, completely separate from the recited judicial exception." Page 4. Instead, "the analysis should take into consideration all the claim limitations and how these limitations interact and impact each other when evaluating whether the exception is integrated into a practical application." Id.
At prong 2, a claim is eligible if it "reflects an improvement to the functioning of a computer or to another technology or technical field." August Memorandum, page 4. As the August Memorandum explains, "[a]n important consideration in determining whether a claim improves technology or a technical field is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome." Id. To do so, one must "consult the specification to determine whether the disclosed invention improves technology or a technical field," and "evaluate the claim to ensure it reflects the disclosed improvement," noting that "[t]he specification does not need to explicitly set forth the improvement," but must "describe the invention such that the improvement would be apparent to one of ordinary skill in the art," and that "[t]he claim itself does not need to explicitly recite the improvement described in the specification." Id.
Further, "Examiners are cautioned not to oversimplify claim limitations and expand the application of the 'apply it' consideration," and the August Memorandum notes that relevant considerations include "[w]hether the claim recites only the idea of a solution or outcome" rather than "details of how a solution to a problem is accomplished" or "a particular solution to a problem or a particular way to achieve a desired outcome," as well as "[w]hether the claim invokes computers or other machinery merely as a tool to perform an existing process," or whether "the claim purports to improve computer capabilities or to improve an existing technology." Id.
Summary of the rejection and Applicant’s position
Claims 1–20 are rejected under 35 U.S.C. § 101 as directed to an abstract idea (certain methods of organizing human activity, and/or mental processes) without additional elements that integrate the exception into a practical application, and without “significantly more.” Applicant argues the claims integrate the exception into a practical application by improving computer-implemented object recognition and indoor navigation via a two-level machine-learning feature extraction and similarity analysis pipeline feeding trained neural networks, combined with in-store camera analysis and guidance generation.
Step 2A, Prong One: Whether the claims recite a judicial exception
The claims, as drafted, recite: receiving digital media from a device at a POS terminal associated with a physical location; extracting “visual feature vectors” and “functional feature vectors”; inputting those feature vectors into trained neural networks to output product identifiers; determining items available in the store via camera-collected information; identifying target items similar to the depicted objects; and generating guidance to navigate to those items.
As claimed, the core concept is matching content in a user’s digital media to products available at a store and guiding the user to those products—i.e., facilitating shopping and navigation in a retail environment. This constitutes a method of organizing human activity (commercial interactions, marketing/sales assistance, customer guidance), and also includes mental-process type information analysis (identifying objects, comparing similarity, deciding targets) that is performed with generic computers. See MPEP 2106.04(a)(2); 2019 PEG, Section I(A) (examples of abstract ideas including certain methods of organizing human activity and mental processes such as observation, evaluation, and recommendation).
Step 2A, Prong Two: Whether the claims integrate the exception into a practical application
Applicant correctly cites the August 2019 Update’s holistic analysis. However, the claims here do not reflect a particular improvement to the functioning of a computer or to another technology/technical field; rather, they apply conventional image processing and machine learning (feature vectors, trained neural networks) to a retail matching/guidance workflow.
The claims do not recite any technological details indicating an improvement to the computer itself (e.g., an improvement in memory management, training methodology, inference efficiency, latency, accuracy under specific constraints, or a novel neural network architecture). Instead, they invoke neural networks as a tool to perform object recognition and product matching—activities that simply automate what humans could do conceptually. See MPEP 2106.05(a) (improvements to computer functionality must be reflected in the claim); 2106.04(d)(1) (look for a particular solution to a technological problem, not merely using a computer as a tool).
The “set of cameras,” “POS terminal,” and “guidance” elements are recited at a high level of generality without specific implementation details showing an improvement in camera technology, sensor fusion, indoor localization, or navigation systems (e.g., no recited calibration procedures, error models, mapping algorithms, or signal processing innovations). Merely using cameras to collect data and then “generating guidance” to navigate to items is an outcome-focused description of organizing in-store activity, not a particular technological solution. See MPEP 2106.05(f) (mere data gathering); 2106.05(e) (insignificant post-solution activity).
The claim does not tie the process to a “particular machine” in a manner that imposes meaningful limits (beyond generic use of a computing device, a POS terminal, and cameras), nor does it effect a transformation of an article. See MPEP 2106.05(b) and (c).
Accordingly, under Step 2A, Prong Two, the additional elements—considered individually and as an ordered combination—do not integrate the judicial exception into a practical application. The claims are directed to the abstract idea.
Step 2B: Whether the claims recite “significantly more”
Even if viewed as abstract under Step 2A, the claims must recite additional elements amounting to significantly more than the exception. The recited use of trained neural networks, feature extraction, cameras, and guidance generation are well-understood, routine, and conventional activities in the field of computer vision and retail guidance systems when claimed at this level of generality. See MPEP 2106.05(d); Berkheimer memorandum (examiners must support WURC determinations—support has been provided in the prior action via citations to conventional computer-vision pipelines and retail indoor navigation practices; Applicant has not provided contrary evidence showing non-conventionality of the claimed implementation).
Applicant has not identified claim language reflecting a non-conventional arrangement or an improvement to ML technology itself (e.g., specific training regimes that reduce overfitting, novel loss functions, specialized feature vector constructions with quantified performance gains, architectural constraints tailored to POS hardware, or real-time indoor localization innovations). Without such specifics, invoking NNs and feature vectors does not supply an “inventive concept.” See MPEP 2106.05(g) (adding a generic computer to perform the abstract idea does not provide significantly more).
Applicant further argues on page 12 the following:
Moreover, as the August Memorandum reminds, "if it is a 'close call' as to whether a claim is eligible," an Examiner "should only make a rejection when it is more likely than not (i.e., more than 50%) that the claim is ineligible under 35 U.S.C. 101." Page 5. That is, "[a] rejection of a claim should not be made simply because an examiner is uncertain as to the claim's eligibility," and "[i]n order to make a rejection of a claim" as being directed to ineligible subject matter, "unpatentability must be established by a preponderance of the evidence." Page 5 (emphasis added). Applicant respectfully submits that the present claims are clearly eligible as discussed above, and that the present claims at least reflect a "close call" such that unpatentability cannot be established by a preponderance of the evidence, as required.
Moreover, in the recently released Decision on Request for Rehearing, issued by the Appeals Review Panel of the Patent Trial and Appeal Board in Ex parte Desjardins, the Appeals Review Panel explained that the claims at issue were directed to eligible subject matter because they improved the functioning of the computer, such as by improving the training of the model itself. Page 8. Applicant submits that the present claims similarly reflect improvements to the machine learning technology itself.
Additionally, the Appeals Review Panel noted that, under the reasoning presented in the rejection, "many AI innovations are potentially unpatentable—even if they are adequately described and nonobvious—because the panel essentially equated any machine learning with an unpatentable 'algorithm' and the remaining additional elements as 'generic computer components,' without adequate explanation." Id. at 9. The Appeals Review Panel reminded Examiners and panels to "not evaluate claims at such a high level of generality." Id. Applicant respectfully submits that the present rejections similarly "evaluate [the] claims at [] a high level of generality" without adequate explanation.
Therefore, Applicant respectfully submits that the amended claims as a whole reflect an improvement to a computer, specifically an improvement to the technical field of computer-implemented object recognition and indoor navigation. Accordingly, the amended claims as a whole integrate the exception into a practical application such that the claims are not directed to the judicial exception.
Accordingly, Applicant submits that claims 1, 12, and 20 are patent eligible under § 101 and respectfully requests that this rejection be withdrawn.
August Memorandum “close call” argument: Based on the current record, it is more likely than not that the claims are directed to an abstract idea and lack integration into a practical application. The claims are drafted at a business/process level and rely on generic ML and vision steps without reciting a particular technological improvement. Therefore, the rejection is appropriate under the preponderance standard. See 2019 PEG and August 2019 Update.
Improvement to “computer-implemented object recognition and indoor navigation”: The specification describes desired outcomes, but the claims do not reflect a specific improvement to how computers perform object recognition or indoor navigation. The claims call for extracting feature vectors and using trained NNs but do not claim new training techniques, model architectures, sensor fusion algorithms, or navigation computations that improve the technology itself.
Ex parte Desjardins: Applicant’s reliance on the ARP decision is noted. In Desjardins, eligibility turned on claims that improved the functioning of the computer/model itself (e.g., specific improvements to training or model operation). Here, the claims do not recite an improvement to training or model functioning; they recite using trained NNs and feature vectors to perform matching and guidance, which is application-level usage rather than a technological improvement. Accordingly, Desjardins is distinguishable on the facts.
Therefore, the § 101 rejection of claims 1–20 is maintained. The claims are directed to an abstract idea and do not recite additional elements that integrate the exception into a practical application, nor do they provide “significantly more.”
Applicant’s arguments, see pages 12-13, with respect to the rejection(s) of claim(s) 1-20 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of new prior art.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1
Claims 1-20 are directed to a method, a system, and a computer-readable medium. Thus, each of the claims falls within one of the four statutory categories. Nevertheless, the claims fall within the judicial exception of an abstract idea.
Step 2A- Prong One
Independent claims 1, 12 and 20 recite steps that, under their broadest reasonable interpretations, cover certain methods of organizing human activity, e.g., advertising, marketing, or sales activities. Specifically, claim 1 recites:
receiving a digital media from a computing device coupled to a point-of-sale (POS) terminal associated with a physical location;
identifying objects depicted within the digital media, comprising:
extracting visual feature vectors and functional feature vectors from the digital media, and
inputting the extracted visual and functional feature vectors into one or more trained neural networks (NNs) to output one or more product identifiers corresponding to the objects;
determining items currently available at the physical location by analyzing information collected by a set of cameras at the physical location;
identifying a set of target items, from the items currently available at the physical location, that are similar to at least one of the objects based on the visual and functional vector feature vectors extracted from the digital media and corresponding visual and functional vector feature vectors extracted from the collected information; and
generating a guidance that navigates to at least one of the set of target items within the physical location.
But for the recitation of generic computer components like digital media, computing device, point-of-sale (POS) terminal, neural networks (NNs) and cameras, the italicized functions, when considered as a whole, describe a situation where a person walks into a store or warehouse looking for certain items. If those certain items are not available, the person is guided to alternative or similar items that are available. Accordingly, claim 1 recites an abstract idea in the form of a certain method of organizing human activity.
Similarly, claims 12 and 20 recite:
one or more memories collectively storing computer-executable instructions; and
one or more processors configured to collectively execute the computer-executable instructions and cause the system to:
receive a digital media from a computing device coupled to a point-of-sale (POS) terminal associated with a physical location;
identify objects depicted within the digital media, comprising:
extracting visual feature vectors and functional feature vectors from the digital media, and
inputting the extracted visual and functional feature vectors into one or more trained neural networks (NNs) to output one or more product identifiers corresponding to the objects;
determine items currently available at the physical location by analyzing information collected by a set of cameras at the physical location;
identify a set of target items, from the items currently available at the physical location, that are similar to at least one of the objects based on the visual and functional vector feature vectors extracted from the digital media and corresponding visual and functional vector feature vectors extracted from the collected information; and
generate a guidance that navigates to at least one of the set of target items within the physical location.
But for the recitation of generic computer components like digital media, computing device, point-of-sale (POS) terminal, neural networks (NNs) and cameras, the italicized functions, when considered as a whole, describe a situation where a person walks into a store or warehouse looking for certain items. If those certain items are not available, the person is guided to alternative or similar items that are available. Accordingly, claims 12 and 20 recite an abstract idea in the form of a certain method of organizing human activity.
Dependent claims 2-11 and 13-19 inherit the limitations that recite an abstract idea from their dependence on claims 1 and 12, respectively, and thus these claims also recite an abstract idea under the Step 2A- Prong One analysis. In addition, claims 2-11 and 13-19 recite additional limitations that further describe the abstract idea identified in the independent claims. Examiner notes that dependent claims 2-11 and 13-19 recite some additional generic computer components, such as a device of a user, neural networks, an inventory database, artificial intelligence-based algorithms, and a scanning camera, which will be analyzed later.
Claims 2 and 13 recite wherein the guidance is displayed on at least one of (i) the POS terminal associated with the physical location or (ii) a device of a user, and the guidance is displayed along with information related to at least one of the set of target items. (Commercial or legal interactions including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations)
Claims 3 and 13 recite monitoring, via the set of cameras at the physical location, changes in status of at least one of the set of target items in real time, and updating the guidance based on the changes. (Commercial or legal interactions including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations)
Claims 4 and 13 recite accessing an inventory database to check inventory of at least one of the objects at one or more other physical locations; and generating an alternative purchase path to the user. (Commercial or legal interactions including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations)
Claims 5 and 13 recite wherein the set of cameras at the physical location are configured with artificial intelligence-based algorithms to determine at least one of (i) a category or (ii) a quantity of each of the items currently available at the physical location. (Commercial or legal interactions including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations and mathematical calculations, formulas or relationships)
Claims 6 and 13 recite wherein the set of target items comprises at least one of (i) one or more currently available items that are same as at least one of the objects, (ii) one or more currently available items that are visually similar to at least one of the objects, or (iii) one or more currently available items that are functionally similar to at least one of the objects. (Commercial or legal interactions including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations)
Claims 7 and 13 recite wherein the digital media may comprise at least one of an image, a video, a live stream, a three-dimensional model, or a motion graphic. (Commercial or legal interactions including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations)
Claims 8 and 13 recite wherein: the one or more neural networks are trained using historical received digital media as inputs, and labeled product identifiers as target outputs, and the one or more neural networks learn to correlate features from each respective digital media of the historical received digital media to a respective product identifier of the labeled product identifiers. (Commercial or legal interactions including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations and mathematical calculations, formulas or relationships)
Claim 9 recites receiving feedback from a user regarding an accuracy of the objects identified from the digital media; and refining the one or more neural networks based on the received feedback. (Commercial or legal interactions including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations and mathematical calculations, formulas or relationships)
Claim 10 recites transmitting an alert to a device of a user when the user approaches a location of an item from the set of target items. (Commercial or legal interactions including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations)
Claim 11 recites wherein receiving the digital media comprises scanning, by the POS terminal, the digital media displayed on the computing device, and wherein the POS terminal comprises a scanning camera. (Commercial or legal interactions including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations)
Step 2A-Prong 1: YES. The claims are abstract
Step 2A- Prong Two
This judicial exception is not integrated into a practical application. In particular, independent claims 1, 12 and 20 do not include additional elements that integrate the abstract idea into a practical application. The cited additional elements (one or more memories collectively storing computer-executable instructions, a computing device, a POS terminal, processors, digital media, neural networks, and cameras) amount to mere instructions to implement an abstract idea on a computer; see Applicant’s specification paragraphs 12-14, 64-66 and 75-80, and see MPEP 2106.05(f).
The judicial exception recited in dependent claims 2-11 and 13-19 is also not integrated into a practical application under a similar analysis as above. The functions of claims 2-11 and 13-19 are performed with the same additional elements introduced in the independent claims, along with additional generic computer components such as a device associated with a physical location, a device of a user, neural networks, an inventory database, artificial intelligence-based algorithms, and a scanning camera; however, even these additional elements amount to mere instructions to implement an abstract idea on a computer (see MPEP 2106.05(f)).
Step 2A-Prong 2: NO. The additional claimed elements are not integrated into a practical application.
Step 2B
Claims 1-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements (one or more memories collectively storing computer-executable instructions, processors, digital media, cameras, a device associated with a physical location, a device of a user, neural networks, an inventory database, artificial intelligence-based algorithms, a computing device, a POS terminal, and a scanning camera) amount to mere instructions to implement an abstract idea on a computer (see MPEP 2106.05(f)), as evidenced by Applicant’s specification paragraphs 12-14, 64-66 and 75-80. The computer components above are described at a very high level and are generic in nature, such that one of ordinary skill in the art would understand that generic computer components could be used to implement the invention.
Step 2B: NO. The claims do not provide significantly more.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Schack et al., US Patent No. 11,783,400 (see PTO-892, Ref. C) in view of Keith, US Patent No. 10,915,906 (see PTO-892, Ref. A) and further in view of Truitner, US Patent No. 11,907,841 (see PTO-892, Ref. D).
As per claim 1, Schack teaches
determining items currently available at the physical location by analyzing information collected by a set of cameras at the physical location (see col. 27, lines 53-67 and col. 28, lines 1-14);
identifying a set of target items, from the items currently available at the physical location, that are similar to at least one of the objects based on the digital media and the collected information (see col. 29, lines 31-58); and
generating a guidance that navigates the user to at least one of the set of target items within the physical location (see col. 41, lines 8-34).
Schack does not explicitly teach receiving a digital media from a computing device coupled to a point-of-sale (POS) terminal associated with a physical location, identifying objects depicted within the digital media, comprising: extracting visual feature vectors and functional feature vectors from the digital media, inputting the extracted visual and functional feature vectors into one or more trained neural networks (NNs) to output one or more product identifiers corresponding to the objects and identifying a set of target items, from the items currently available at the physical location, that are similar to at least one of the objects based on the visual and functional vector feature vectors extracted from the digital media and corresponding visual and functional vector feature vectors extracted from the collected information.
Keith teaches receiving a digital media from a computing device coupled to a point-of-sale (POS) terminal associated with a physical location (see col. 16, lines 10-16).
Therefore, it would have been prima facie obvious to a person of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Schack and Keith to receive digital media from a computing device coupled to a point-of-sale (POS) terminal associated with a physical location because it would assist a consumer in picking up the item at a physical location, as taught by Keith (see col. 22, lines 61-65).
Truitner teaches identifying objects depicted within the digital media, comprising: extracting visual feature vectors and functional feature vectors from the digital media (see col. 10, lines 8-67 and col. 11, lines 1-51),
inputting the extracted visual and functional feature vectors into one or more trained neural networks (NNs) to output one or more product identifiers corresponding to the objects (see col. 11, lines 12-30) and
identifying a set of target items, from the items currently available at the physical location, that are similar to at least one of the objects based on the visual and functional vector feature vectors extracted from the digital media and corresponding visual and functional vector feature vectors extracted from the collected information (see col. 10, lines 8-67 and col. 11, lines 1-51).
Therefore, it would be prima facie obvious to a person of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Schack and Keith with Truitner to identify items based on visual and functional feature vectors extracted from digital media because it would assist a consumer in identifying products more easily, as taught by Truitner (see col. 9, lines 9-15).
Regarding claim 2, Schack, Keith and Truitner teach the method of claim 1 as seen above. Schack further teaches wherein the guidance is displayed on at least one of (i) the POS terminal associated with the physical location or (ii) a device of a user, and the guidance is displayed along with information related to at least one of the set of target items (see col. 3, lines 57-67 to col. 4, lines 1-21, and col. 6, lines 50-52).
Regarding claim 3, Schack, Keith and Truitner teach the method of claim 1 as seen above. Schack further teaches monitoring, via the set of cameras at the physical location, changes in status of at least one of the set of target items in real time, and updating the guidance based on the changes (see col. 45, lines 32-67 to col. 46, lines 1-41).
Regarding claim 4, Schack, Keith and Truitner teach the method of claim 1 as seen above. Schack further teaches accessing an inventory database to check inventory of at least one of the objects at one or more other physical locations; and generating an alternative purchase path (see col. 46, lines 16-41 and col. 49, lines 9-55).
Regarding claim 5, Schack, Keith and Truitner teach the method of claim 1 as seen above. Schack further teaches wherein the set of cameras at the physical location are configured with artificial intelligence-based algorithms to determine at least one of (i) a category or (ii) a quantity of each of the items currently available at the physical location (see col. 12, lines 3-67 to col. 13, lines 1-7).
Regarding claim 6, Schack, Keith and Truitner teach the method of claim 1 as seen above. Schack further teaches wherein the set of target items comprises at least one of (i) one or more currently available items that are same as at least one of the objects, (ii) one or more currently available items that are visually similar to at least one of the objects, or (iii) one or more currently available items that are functionally similar to at least one of the objects (see col. 29, lines 31-67 to col. 30, lines 1-26).
Regarding claim 7, Schack, Keith and Truitner teach the method of claim 1 as seen above. Keith further teaches wherein the digital media comprises at least one of an image (see col. 16, lines 10-16).
Therefore, it would be prima facie obvious to a person of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Schack and Keith to receive digital media comprising an image because it would assist a consumer in picking up the item at a physical location, as taught by Keith (see col. 22, lines 61-65).
As per claim 8, Schack, Keith and Truitner teach the method of claim 1 as seen above. Truitner further teaches wherein:
the one or more neural networks are trained using historical received digital media as inputs, and labeled product identifiers as target outputs (see col. 10, lines 8-67 and col. 11, lines 1-51), and
the one or more neural networks learn to correlate features from each respective digital media of the historical received digital media to a respective product identifier of the labeled product identifiers (see col. 10, lines 8-67 and col. 11, lines 1-51).
Therefore, it would be prima facie obvious to a person of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Schack and Keith with Truitner to train neural networks with digital media inputs because it would assist a consumer in identifying products more easily, as taught by Truitner (see col. 9, lines 9-15).
As per claim 9, Schack, Keith and Truitner teach the method of claim 8 as seen above. Truitner further teaches:
receiving feedback from the user regarding an accuracy of the objects identified from the digital media; and
refining the one or more neural networks based on the received feedback (see col. 11, lines 22-30 and col. 16, lines 24-38).
Therefore, it would be prima facie obvious to a person of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Schack and Keith with Truitner to train neural networks with feedback regarding a product because it would allow for improvements to the digital media image, as taught by Truitner (see col. 16, lines 33-38).
Regarding claim 10, Schack, Keith and Truitner teach the method of claim 1 as seen above. Schack further teaches transmitting an alert to a device of a user when the user approaches a location of an item from the set of target items (see col. 52, lines 12-35).
Regarding claim 11, Schack, Keith and Truitner teach the method of claim 1 as seen above. Keith further teaches wherein receiving the digital media comprises scanning, by the POS terminal, the digital media displayed on the computing device, and wherein the POS terminal comprises a scanning camera (see col. 22, lines 66-67, col. 23, lines 1-17 and col. 37, lines 17-24 and 54-59).
Therefore, it would be prima facie obvious to a person of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Schack and Keith to scan digital media displayed on a user device by a scanning device at a physical location because it would help a retailer with loss prevention initiatives as taught by Keith (see col. 22, lines 66-67 and col. 23, lines 1-17).
Claims 12 and 20 recite limitations similar to those of claim 1 and are thus rejected using the same art and rationale as set forth in the rejection of claim 1 above.
Claim 13 recites limitations similar to those of claim 2 and is thus rejected using the same art and rationale as set forth in the rejection of claim 2 above.
Claim 14 recites limitations similar to those of claim 3 and is thus rejected using the same art and rationale as set forth in the rejection of claim 3 above.
Claim 15 recites limitations similar to those of claim 4 and is thus rejected using the same art and rationale as set forth in the rejection of claim 4 above.
Claim 16 recites limitations similar to those of claim 5 and is thus rejected using the same art and rationale as set forth in the rejection of claim 5 above.
Claim 17 recites limitations similar to those of claim 6 and is thus rejected using the same art and rationale as set forth in the rejection of claim 6 above.
Claim 18 recites limitations similar to those of claim 7 and is thus rejected using the same art and rationale as set forth in the rejection of claim 7 above.
Claim 19 recites limitations similar to those of claim 8 and is thus rejected using the same art and rationale as set forth in the rejection of claim 8 above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHID R MERCHANT whose telephone number is (571)270-1360. The examiner can normally be reached M-F 7:30-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Namrata Boveja can be reached at 571-272-8105. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Shahid Merchant/Supervisory Patent Examiner, Art Unit 3684