Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.
Status of Claims
Claims 1, 2, 6–9, and 12–19 have been amended.
Claims 3 and 4 have been cancelled.
No claims have been added.
Claim Objections
Claim 8 is objected to because of the following informalities: “an” should be “a” before “non-AI”. Appropriate correction is required.
Claim 17 is objected to because of the following informalities: “of” should be inserted before “an item under consideration” in the fourth limitation, last line. Appropriate correction is required.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 2, and 5–17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.
In regards to claim 1, the metes and bounds of “machine destinations” are unknown because the specification does not provide any definition of what this concept is supposed to be. As a result, the Examiner is further unable to determine how a database is determined from the machine destinations because “machine destinations” is unknown.
Claim Rejections - 35 USC § 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 2, and 5–17 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
In regards to claims 1, 2, and 5–17, the Examiner asserts that the following is new matter:
Claim 1:
“one or more machine destinations”
Claim 12:
“without requiring human expert”
With regards to claim 1, the specification fails to disclose “machine destinations” or provide any definition to determine equivalent language.
With regards to claim 12, the claim encompasses all types of human experts, while the specification only provides support for in-store experts (¶ 4). The claimed invention is directed toward not requiring any human expert, whether in-store, outside the store, or online. The specification only provides support that an in-store expert, presumably an employee or other personnel of the store, is not required, but does not prohibit the user from involving another human, e.g., a customer service representative, a friend, or a human outside the store.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, and 5–20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite:
Claim 1:
determining destinations of a user-provided image of an item by comparing the image to one or more authentic images of the item based on an initial categorization of the item;
comparing the image to one or more authentic images of the item containing reference images of the item, feature descriptions of the item, and a location metric associated with the item being considered by the user;
perform an authenticity evaluation and provide a level of authenticity of the item;
weighting the level of authenticity of the item;
forwarding the score and storing a response from a recipient of the score;
instruct the user to take one or more images of the item to be evaluated;
provide a rapid and improved measure of authenticity of an item being considered by the user
Claim 12:
instruct the user to take at least one of images and a video of an item desired for purchase;
sending the least one of images and video to evaluates the item within the at least one of images and video;
determining the item type and brand;
comparing the at least one of images and video with images of an authentic item, to dynamically perform a weight comparison based on item features and generate a rapid authenticity determination;
generating at least one of a score and score range based on the comparing and weighting and rapid authenticity determination; and
sending the at least one of score and score range to the user,
wherein the method provides a rapid and improved measure of authenticity of the item for the user
Claim 17:
instruct a user to take one or more images an item under consideration [send the image];
evaluating received one or more images, perform at least one of image correction, adjustment, and quality control of the user-provided images, and return instructions to the user if additional images or information is required, or a preliminary authentication score;
providing instructions and prompts to obtain authentication scores from each analysis and to compare the authentication scores to arrive at a weighted authentication score; and
sending the weighted authentication score, for rapid evaluation of the item
The invention is directed towards the abstract idea of collecting and comparing images of a product to determine whether the product is authentic, which corresponds to the “Mental Processes” and “Certain Methods of Organizing Human Activities” groupings because it is directed towards steps that can be performed by a human with the aid of pen and paper. For example, a user provides a picture of a product to a second user (NOTE: a second user is not required, as a single user can compare two pictures of a product to determine if they match), and the second user looks at the picture, compares it against a picture of a verified authentic item, and, based on the comparison and rule(s), determines whether the product is authentic and assigns it a score reflecting the level of certainty that the product is authentic (risk mitigation). Alternatively, the invention can be performed by a human drawing a picture of the product in question, comparing the drawing against a verified drawing/picture of the product, and, based on the visual comparison, determining with a level of confidence whether the product is authentic (risk mitigation).
The limitations of:
Claim 1:
determining destinations of a user-provided image of an item by comparing the image to one or more authentic images of the item based on an initial categorization of the item;
comparing the image to one or more authentic images of the item containing reference images of the item, feature descriptions of the item, and a location metric associated with the item being considered by the user;
perform an authenticity evaluation and provide a level of authenticity of the item;
weighting the level of authenticity of the item;
forwarding the score and storing a response from a recipient of the score;
instruct the user to take one or more images of the item to be evaluated;
provide a rapid and improved measure of authenticity of an item being considered by the user
Claim 12:
instruct the user to take at least one of images and a video of an item desired for purchase;
sending the least one of images and video to evaluates the item within the at least one of images and video;
determining the item type and brand;
comparing the at least one of images and video with images of an authentic item, to dynamically perform a weight comparison based on item features and generate a rapid authenticity determination;
generating at least one of a score and score range based on the comparing and weighting and rapid authenticity determination; and
sending the at least one of score and score range to the user,
wherein the method provides a rapid and improved measure of authenticity of the item for the user
Claim 17:
instruct a user to take one or more images an item under consideration [send the image];
evaluating received one or more images, perform at least one of image correction, adjustment, and quality control of the user-provided images, and return instructions to the user if additional images or information is required, or a preliminary authentication score;
providing instructions and prompts to obtain authentication scores from each analysis and to compare the authentication scores to arrive at a weighted authentication score; and
sending the weighted authentication score, for rapid evaluation of the item,
are processes that, under their broadest reasonable interpretation, cover performance of the limitations by a human, in the human mind, and/or with the aid of pen and paper, but for the recitation of a generic user (smart) computing device, a generic server, a generic App to capture a picture using a generic camera and transmit the picture using a generic communication interface, a generic Cloud Manager module, and a generic communication network. That is, other than reciting these generic components, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the recitation of these generic components, the claims in this context encompass having a user provide a picture of a product to a second user and having the second user (NOTE: a second user is not required, as a single user can compare two pictures of a product to determine if they match) look at the picture, compare it against a picture of a verified authentic item, and, based on the comparison and rule(s), determine whether the product is authentic and assign it a score reflecting the level of certainty that the product is authentic.
Alternatively, the invention can be performed by a human drawing a picture of the product in question, comparing the drawing against a verified drawing/picture of the product, and, based on the visual comparison, determining with a level of confidence whether the product is authentic. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” and “Certain Methods of Organizing Human Activities” groupings of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claims only recite additional elements: a generic user (smart) computing device, a generic server, a generic App and a generic Cloud Manager module to capture a picture using a generic camera and transmit the picture using a generic communication interface, and a generic communication network to communicate and capture information, while performing operations that a human can perform in the mind and/or with pen and paper, i.e., comparing information and providing an assessment based on the comparison. These additional elements are recited at a high level of generality (i.e., as generic computer components performing generic computer functions).
The additional elements perform the insignificant extra-solution activity of communicating information (see MPEP 2106.05(g)), and are otherwise merely applied to perform steps that can be performed by a human, in the human mind, and/or with the aid of pen and paper: "[use] of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more.” According to the MPEP, this is not limited solely to computers but includes other technology that, when recited in a manner equivalent to “apply it,” is a mere instruction to perform the abstract idea on that technology (see MPEP 2106.05(f)). The additional elements therefore amount to no more than mere instructions to apply the exception using a generic user (smart) computing device, a generic server, a generic App to capture a picture using a generic camera and transmit the picture using a generic communication interface, a generic Cloud Manager module, and a generic communication network.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a generic user (smart) computing device, a generic server, a generic App to capture a picture using a generic camera and transmit the picture using a generic communication interface, a generic Cloud Manager module, and a generic communication network to perform the steps of:
Claim 1:
determining destinations of a user-provided image of an item by comparing the image to one or more authentic images of the item based on an initial categorization of the item;
comparing the image to one or more authentic images of the item containing reference images of the item, feature descriptions of the item, and a location metric associated with the item being considered by the user;
perform an authenticity evaluation and provide a level of authenticity of the item;
weighting the level of authenticity of the item;
forwarding the score and storing a response from a recipient of the score;
instruct the user to take one or more images of the item to be evaluated;
provide a rapid and improved measure of authenticity of an item being considered by the user
Claim 12:
instruct the user to take at least one of images and a video of an item desired for purchase;
sending the least one of images and video to evaluates the item within the at least one of images and video;
determining the item type and brand;
comparing the at least one of images and video with images of an authentic item, to dynamically perform a weight comparison based on item features and generate a rapid authenticity determination;
generating at least one of a score and score range based on the comparing and weighting and rapid authenticity determination; and
sending the at least one of score and score range to the user,
wherein the method provides a rapid and improved measure of authenticity of the item for the user
Claim 17:
instruct a user to take one or more images an item under consideration [send the image];
evaluating received one or more images, perform at least one of image correction, adjustment, and quality control of the user-provided images, and return instructions to the user if additional images or information is required, or a preliminary authentication score;
providing instructions and prompts to obtain authentication scores from each analysis and to compare the authentication scores to arrive at a weighted authentication score; and
sending the weighted authentication score, for rapid evaluation of the item,
amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
Additionally:
Claim 2 is directed towards reciting generic technology at a high level of generality and applying it to the abstract idea.
Claims 1, 6, 7, 10, 11, 12, 18 are directed towards reciting generic technology at a high level of generality and applying it to the abstract idea.
Although the claims recite “artificial intelligence”/“machine learning”, the claims and specification fail to provide sufficient disclosure regarding an improvement to how “artificial intelligence”/“machine learning” can be trained; they simply recite, at a high level of generality, that “artificial intelligence”/“machine learning” is being trained. There is insufficient evidence from the specification to indicate that the use of the “artificial intelligence”/“machine learning” involves anything other than the generic application of a known technique, or that the claimed invention purports to improve the functioning of the computer itself or the “artificial intelligence”/“machine learning”. None of the limitations reflects an improvement in the functioning of a computer or an improvement to other technology or technical field; applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim; effects a transformation or reduction of a particular article to a different state or thing; or applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
Even training and applying “artificial intelligence”/“machine learning” is simply the application of a computer model, itself a manifestation of an abstract idea. Further, such training and application of a model is no more than putting data into a black-box “artificial intelligence”/“machine learning” operation. The designation “artificial intelligence”/“machine learning” is a functional label, devoid of technological implementation and application details. The specification does not contend that the applicant invented any of these activities, or the creation and use of such “artificial intelligence”/“machine learning”. In short, each step does no more than require a generic computer to perform generic computer functions. As to the data operated upon, "even if a process of collecting and analyzing information is 'limited to particular content' or a particular 'source,' that limitation does not make the collection and analysis other than abstract." SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161, 1168 (Fed. Cir. 2018).
The Examiner asserts that the scope of the disclosed invention, as presented in the originally filed specification, is not directed towards the improvement of “artificial intelligence”/“machine learning”, but towards the collection and comparison of product images and, based on the comparison, the determination of a level of matching, which is then used to determine how confident the system is that the product is authentic. The specification’s disclosure on “artificial intelligence”/“machine learning” is nothing more than a high-level, general explanation of generic technology applied to the abstract idea. Per MPEP § 2106.05(f), the recitation and application of “artificial intelligence”/“machine learning” merely facilitate the tasks of the abstract idea, providing nothing more than a results-oriented solution that lacks detail of the mechanism for accomplishing the result and is equivalent to the words “apply it.” The Examiner asserts that, in light of the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence, the claimed invention is analogous to Example 47, Claim 2.
Further, the combination of these elements is nothing more than a generic computing system with “artificial intelligence”/“machine learning”. Because the additional elements are merely instructions to apply the abstract idea to a computer, as described in MPEP § 2106.05(f), they do not integrate the abstract idea into a practical application.
Additionally, claim 10 recites human activity and extra-solution activity, in this case, forwarding a user’s decision.
Claim 5 is directed towards reciting generic technology at a high level of generality and applying it to the abstract idea.
Claim 8 is directed towards collecting and comparing information and providing the results of the comparison.
Claim 9 is directed towards describing user-provided information.
Claims 13–16, 19, and 20 recite subject matter similar to what has already been discussed above.
In summary, the dependent claims are simply directed towards providing additional descriptive factors that are considered for determining if a product is authentic. Accordingly, the claims are not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, and 5–20 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Guinard et al. (US PGPub 2021/0142337 A1).
In regards to claim 1, Guinard discloses a smart electronic device, user directed item authenticity evaluation system, comprising:
a computer-readable non-transitory medium having encoded thereon computer-executable instructions to (Fig. 1; ¶ 34, 38):
electronically determining one or more machine destinations of a user-provided image of an item based on an initial categorization of the item (as best understood, in light of the rejections under 35 U.S.C. 112, ¶ 58 wherein the product being evaluated is compared against products in a similar category, i.e. the system determines the product category so that it knows what to compare it against (i.e. other products in a similar category); Fig. 6, 10; ¶ 103, 106 the system is also in communication with a plurality of data sources to facilitate the authenticity analysis);
electronically comparing the image to one or more authentic images of the item in a database containing reference images of the item, feature descriptions of the item, and a location metric associated with the item being considered by the user, wherein the database is from the determined one or more machine destinations (¶ 50, 56, 71 wherein user provided images of an item are compared against an authentic image of the item; ¶ 50, 52, 56, 71, 93, 114, 115 wherein the system includes a database containing reference images of the item; ¶ 52, 53, 58, 59, 68, 114, 115 wherein additional information that the system can utilize to determine authenticity includes a description of the item, location of the item, etc.; ¶ 58 wherein the product being evaluated is compared against products in a similar category);
electronically prompting a plurality of artificial intelligence (AI) engines to perform an authenticity evaluation and provide a level of authenticity of the item (¶ 59, 71, 105 wherein a level of authenticity is determined; ¶ 58, 59, 71, 105, 112 wherein the system utilizes one or more machine learning models and/or one or more neural networks to determine the authenticity of the item);
electronically weighting the level of authenticity of the item (¶ 58, 117 wherein the authenticity score may be a weighted sum of values);
electronically generate a score correlated to the level of authenticity determined (¶ 59, 71, 105 wherein an authenticity score is generated and correlated to the determined level of authenticity by the system);
electronically forwarding the score and storing a response from a recipient of the score (Fig. 6; ¶ 120 wherein the user’s device has an App to allow the user to take images of the item and allows for the image to be transmitted to a central system for the authenticity evaluation; ¶ 60, 75 wherein the central system determines the authenticity of the item and based on the score the transaction is cancelled; Claim 1 wherein the system provides the score to the user’s device; ¶ 51; Claim 9; Claim 14 wherein the assessment of a product’s authenticity is stored for future analysis); and
an App installed on a portable smart device having image taking capabilities, wherein the App provides a client-side interface for the system and is configured to instruct the user to take one or more images of the item, to be evaluated by the system (Fig. 6; ¶ 120 wherein the user’s device has an App to allow the user to take images of the item and allows for the image to be transmitted to a central system for the authenticity evaluation),
wherein the system provides a rapid and improved measure of authenticity of an item being considered by the user (¶ 60, 75 wherein the central system determines the authenticity of the item and utilizes various technologies, as discussed above, in order to provide a rapid and improved measure of authenticity of an item being considered by the user).
In regards to claim 2, Guinard discloses the system of claim 1, further comprising:
a communications network;
the portable smart device of the user transmitting data to the communications network; and
a server connected to the communications network, wherein the server is configured to execute one or more instructions of the computer-readable non-transitory memory
(Fig. 1; ¶ 34, 38, 41, 110 wherein the system comprises a communication network, a smart device having an app to allow for the device to connect to the communication network, and a centralized system to perform the authenticity analysis).
In regards to claim 5, Guinard discloses the system of claim 1, wherein the computer-executable instructions further include instructions to perform at least one of an image correction, adjustment, and quality control of the user-provided image (¶ 116, 124 wherein the system performs adjustments or quality control of the user provided image to further refine its authenticity determination).
In regards to claim 6, Guinard discloses the system of claim 3, wherein the computer-executable instructions further comprise:
instructions to perform at least one of a dynamic generated prompt to the plurality of artificial intelligence (AI) engines, with prompt parameters obtained from at least one of the server and database; and
instructions to perform a weighting of the level of authenticity based on a response by the user
(¶ 52, 55, 56, 57, 58, 59, 121, 126 wherein the system is prompted to utilize machine learning to analyze a user provided image of an item to determine whether the item is authentic and will weigh various parameters to determine an authenticity score; ¶ 58, 59, 71, 105, 112 wherein the system utilizes one or more machine learning models and/or one or more neural networks to determine the authenticity of the item; ¶ 58, 117 wherein the authenticity score may be a weighted sum of values; Fig. 6, 10; ¶ 103, 106 the system is also in communication with a plurality of data sources to facilitate the authenticity analysis; ¶ 54 wherein user information is provided to assist with the analysis; ¶ 102 wherein user provided information is utilized to assist with the analysis).
In regards to claim 7, Guinard discloses the system of claim 4, wherein results of the transmitted data are weighted (¶ 52, 55, 56, 57, 58, 59, 121, 126 wherein the system is prompted to utilize machine learning to analyze a user provided image of an item to determine whether the item is authentic and will weigh various parameters to determine an authenticity score).
In regards to claim 8, Guinard discloses the system of claim 1, further comprising, an non-AI analysis engine in communication to the system, providing at least one of the comparing the image to the one or more authentic images of the item and determining a level of authenticity of the item (¶ 49, 86, 93, 102 wherein the product’s barcode can be utilized to assist with determining authenticity).
In regards to claim 9, Guinard discloses the system of claim 1, wherein the computer-executable instructions further include instructions to the user for additional information of at least one of different user-provided images, a description of the item, a name of location venue, and stated price of the item, wherein information received from the user is used to assist in authentication (¶ 52, 53, 58, 59, 68, 114, 115 wherein additional information that the system can utilize to determine authenticity includes, at least, a description of the item, location of the item, etc.).
In regards to claim 10, Guinard discloses the system of claim 1, further comprising a machine learning model, wherein the App forwards a buy or not-buy decision from the user to the machine learning model (¶ 60, 75 wherein the user, in response to being notified of the item’s authenticity, provides a decision to cancel or discontinue a financial transaction, i.e. purchase of the product).
In regards to claim 11, Guinard discloses the system of claim 10, wherein at least one or more of the instructions are located on the machine learning model, wherein the machine learning model dynamically alters the evaluation and determination instructions (¶ 126 wherein the product authentication improves over time as more data is collected and models are improved).
In regards to claim 12, Guinard discloses an electronic device based method of providing an electronically derived confidence level of authenticity of an item to a customer, without requiring human expert (¶ 35 wherein a human expert is not required), comprising:
installing an App on a user's smart device, the App configured to instruct the user to take at least one of images and a video of an item desired for purchase;
sending the at least one of images and video to a server-run application that evaluates the item within the at least one of images and video
(Fig. 6; ¶ 120 wherein the user’s device has an App to allow the user to take images of the item and allows for the image to be transmitted to a central system for the authenticity evaluation; ¶ 50, 56, 71 wherein user provided images of an item are compared against an authentic image of the item);
electronically determining the item type and the brand within at least one of the App and the server-run application (¶ 45, 53, 59, 61, 93, 104 wherein the brand and item type are determined by the system);
electronically comparing via an artificial intelligence (AI) engine the at least one of images and video with images of an authentic item, to dynamically perform a weight comparison based on item features and generate a rapid authenticity determination (¶ 50, 56, 71 wherein user provided images of an item are compared against an authentic image of the item; ¶ 58, 59, 71, 105, 112 wherein the system utilizes one or more machine learning models and/or one or more neural networks to determine the authenticity of the item; ¶ 60, 75 wherein the central system determines the authenticity of the item and utilizes various technologies, e.g., AI, in order to provide a rapid and improved measure of authenticity of an item being considered by the user; ¶ 58, 117 wherein the authenticity score may be a weighted sum of values; ¶ 50, 52, 56, 71, 93, 114, 115 wherein the system includes a database containing reference images of the item; ¶ 52, 53, 58, 59, 68, 114, 115 wherein additional information that the system can utilize to determine authenticity includes a description of the item, location of the item, etc.; ¶ 58 wherein the product being evaluated is compared against products in a similar category);
electronically generating at least one of a score and score range based on the comparing and weighting and rapid authenticity determination (¶ 59, 71, 105 wherein an authenticity score is generated and correlated to the determined level of authenticity; ¶ 58, 117 wherein the authenticity score may be a weighted sum of values; ¶ 60, 75 wherein the central system determines the authenticity of the item and utilizes various technologies, as discussed above, in order to provide a rapid and improved measure of authenticity of an item being considered by the user); and
electronically sending the at least one of score and score range to the App on the user's smart device (¶ 60, 75 wherein the user is notified of the item’s authenticity score),
wherein the method provides a rapid and improved measure of authenticity of the item for the user (¶ 59, 71, 105 wherein an authenticity score is generated and correlated to the determined level of authenticity, and the system utilizes various technologies, as discussed above, to provide a rapid and improved measure of authenticity of an item being considered by the user).
In regards to claim 13, Guinard discloses the method of claim 12, wherein the comparing is accomplished by forwarding the at least one of images and video to the AI engine performing the evaluation, wherein the forwarding is based on a dynamic generated prompt using a machine learning model (¶ 52, 55, 56, 57, 58, 59, 121, 126 wherein the system is prompted to utilize machine learning to analyze a user provided image of an item to determine whether the item is authentic and will weigh various parameters to determine an authenticity score; ¶ 58, 59, 71, 105, 112 wherein the system utilizes one or more machine learning models and/or one or more neural networks to determine the authenticity of the item).
In regards to claim 14, Guinard discloses the method of claim 13, wherein the at least one score and score range is weighted (¶ 52, 55, 56, 57, 58, 59, 121, 126 wherein the system is prompted to utilize machine learning to analyze a user provided image of an item to determine whether the item is authentic and will weigh various parameters to determine an authenticity score).
In regards to claim 15, Guinard discloses the method of claim 12, wherein images of an authentic item are stored on a database of the server running the evaluation or a database not local to the server (¶ 50, 52, 56, 71, 89, 93, 105, 114, 115 wherein the system includes a database containing, at least, reference images of the item and wherein the information can be retrieved from a plurality of different sources not local to the system, e.g., manufacturer, supplier, and supply chain).
In regards to claim 16, Guinard discloses the method of claim 12, wherein the sending the at least one of score and score range to the App, includes sending a request for additional information of at least one of different user-provided images, a description of the item, a name of location venue, and stated price of the item, wherein information received from the user is used to assist in authentication (¶ 52, 53, 58, 59, 68, 114, 115 wherein additional information that the system can utilize to determine authenticity includes, at least, a description of the item, location of the item, etc.).
In regards to claim 17, Guinard discloses a genuine item electronic device based scanning authentication system, by an individual for rapidly evaluating an item for purchase, comprising:
a communications network;
a server connected to the communication network;
portable smart device configured to communicate to the communications network
(Fig. 1; ¶ 34, 38, 41, 110 wherein the system comprises a communication network, a smart device having an app to allow for the device to connect to the communication network, and a centralized system to perform the authenticity analysis);
an App installed on the portable smart device, having image taking capabilities, wherein the App provides a client-side interface for the server and is configured to instruct a user to take one or more images an item under consideration, to be sent to the server (Fig. 6; ¶ 120 wherein the user’s device has an App to allow the user to take images of the item and allows for the image to be transmitted to a central system for the authenticity evaluation);
an intake module hosted by the server, evaluating received one or more images from App, wherein the intake module is configured to perform at least one of image correction, adjustment, and quality control of the user-provided images, and return instructions to the user if additional images or information is required, or a preliminary authentication score (¶ 116, 124 wherein the system performs adjustments or quality control of the user provided image to further refine its authenticity determination; ¶ 52, 53, 58, 59, 68, 114, 115 wherein additional information that the system can utilize to determine authenticity includes, at least, a description of the item, location of the item, etc.);
a Cloud Manager module, directing information from the App and the intake module to a plurality of computerized analysis engines (¶ 58, 59, 71, 105, 112 wherein the system utilizes one or more machine learning models and/or one or more neural networks to determine the authenticity of the item; ¶ 121, 136 wherein the system is a cloud-based system);
an authentication module, providing instructions and prompts to the plurality of computerized analysis engines to obtain authentication scores from each computerized analysis engine and to compare the authentication scores from each computerized analysis engine to arrive at a weighted authentication score (¶ 52, 55, 56, 57, 58, 59, 121, 126 wherein the system is prompted to utilize machine learning to analyze a user provided image of an item to determine whether the item is authentic and will weigh various parameters to determine an authenticity score; ¶ 58, 59, 71, 105, 112 wherein the system utilizes one or more machine learning models and/or one or more neural networks to determine the authenticity of the item; ¶ 58, 117 wherein the authenticity score may be a weighted sum of values; ¶ 59, 71, 105 wherein an authenticity score is generated and correlated to the determined level of authenticity by the system; ¶ 49, 86, 93, 102 wherein the product’s barcode can be utilized to assist with determining authenticity); and
a reply module hosted by the server, sending the weighted authentication score to at least one of the App and the portable smart device, for rapid evaluation of the item (¶ 60, 75 wherein the user is notified of the item’s authenticity score, and the system utilizes various technologies, as discussed above, to provide a rapid and improved measure of authenticity of an item being considered by the user).
In regards to claim 18, Guinard discloses the system of claim 17, wherein the Cloud Manager module is in a form of dynamic generated prompt with Machine Learning Model adjusted queries to at least one or more artificial intelligence engines connected to the communications network (¶ 52, 55, 56, 57, 58, 59, 121, 126 wherein the system is prompted to utilize machine learning to analyze a user provided image of an item to determine whether the item is authentic and will weigh various parameters to determine an authenticity score; ¶ 121, 136 wherein the system is a cloud-based system; ¶ 58, 59, 71, 105, 112 wherein the system utilizes one or more machine learning models and/or one or more neural networks to determine the authenticity of the item).
In regards to claim 19, Guinard discloses the system of claim 18, further comprising one or more databases containing at least one of the images of an authentic item, characteristics of the item, and a location parameter of the item under consideration, for use in the weighted authentication score (¶ 52, 53, 58, 59, 68, 114, 115 wherein additional information that the system can utilize to determine authenticity includes, at least, a description of the item, location of the item, etc.; ¶ 52, 55, 56, 57, 58, 59, 121, 126 wherein the system is prompted to utilize machine learning to analyze a user provided image of an item to determine whether the item is authentic and will weigh various parameters to determine an authenticity score).
In regards to claim 20, Guinard discloses the system of claim 19, further comprising, storing a buy or not buy decision by the user (¶ 60, 75 wherein the user, in response to being notified of the item’s authenticity, provides a decision to cancel or discontinue a financial transaction, i.e. purchase of the product).
Response to Arguments
Applicant's arguments filed 2/17/2026 have been fully considered but they are not persuasive.
Claim Objections
The objections to the claims have been withdrawn due to amendments.
A new objection has been provided due to amendments.
Rejection under 35 USC 101
The rejection under 35 USC 101 has been maintained.
The Examiner asserts that the claimed invention does not improve technology, resolve an issue that arose in technology, or reflect a solution deeply rooted in technology; rather, it recites generic technology at a high level of generality and applies it to the abstract idea of determining whether a product is authentic, which is based on collecting and comparing information and, based on a rule or rules, identifying options (i.e., authenticity).
Desjardins does not apply because the claimed invention does not improve image processing technology or artificial intelligence but, as stated above, recites the technology at a high level of generality and applies it to the abstract idea. The claimed invention relies on the generic benefits that such technology provides, i.e., faster and more efficient processing. The claimed invention is directed towards comparing pictures of a product to determine whether they match and, if not, determining that the product is not authentic; otherwise, determining that the product is authentic. The claimed invention is not directed towards, nor does it recite, any technological improvement that resolves or improves image recognition technology. Similarly, artificial intelligence is not being improved upon, but is recited at a high level of generality and applied to the claimed invention in order to perform activities that a human can perform, i.e., comparing pictures.
The Examiner asserts that the claimed invention’s recitation and application of artificial intelligence does not rise to the level of, nor meet the guidelines of, Desjardins because the claimed invention is not improving upon, nor resolving an issue that arose in, artificial intelligence but, again, reciting generic artificial intelligence at a high level of generality and applying it to the abstract idea of collecting and comparing information and, based on a rule or rules, determining whether the product pictures match. The applicant further admits on Page 8 that they did “not claim to invent authentication, nor AI, nor comparing of images.” As a result, when reviewing the claimed invention individually and/or as an ordered combination, the Examiner asserts that the claimed invention recites and applies generic technology for the benefits that the technology provides, i.e., faster and more efficient processing, and is not directed towards improving technology, resolving an issue that arose in technology, or a solution deeply rooted in technology. The claimed invention recites that it provides “rapid and improved” measurements of authenticity, but fails to disclose how the technology has been improved upon to achieve this result other than, again, relying on the benefits that technology provides. As a result, the claimed invention is also directed towards an idea of a solution (MPEP § 2106.05(f)).
The invention is directed towards the abstract idea of collecting and comparing images of a product to determine whether the product is authentic, which corresponds to “Mental Processes” and “Certain Methods of Organizing Human Activities” as it is directed towards steps that can be performed by a human(s) with the aid of pen and paper, e.g., having a user provide a picture of a product to a second user and having the second user (NOTE: a second user is not required, as a single user can compare two pictures of a product to determine if they match) look at the picture, compare it against a picture of a verified authentic item and, based on the comparison/rule(s), determine whether the product is authentic and assign it a score reflecting the level of certainty that the product is authentic. Alternatively, the invention can be performed by a human drawing a picture of a product in question and comparing it against a verified drawing/picture of the product and, based on a visual comparison, determining whether the product is authentic based on a level of confidence.
Rejection under 35 USC 102
The Examiner asserts that the applicant’s arguments are directed towards newly amended limitations and are, therefore, considered moot. However, the Examiner has responded to the newly submitted amendments, which the arguments are directed to, in the rejection above, thereby addressing the applicant’s arguments.
Conclusion
The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, can be found in the attached PTO-892 Notice of References Cited.
Gonta et al. (US PGPub 2025/0343700 A1); Ruvini et al. (US Patent 12,488,356 B2); Sumpter et al. (US Patent 12,450,612 B2); Keren (EP 3718019 B1) – which disclose systems and methods for analyzing images of products to determine their authenticity.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GERARDO ARAQUE JR whose telephone number is (571)272-3747. The examiner can normally be reached Monday - Friday 8-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sarah Monfeldt can be reached at 571-270-1833. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
GERARDO ARAQUE JR
Primary Examiner
Art Unit 3629
/GERARDO ARAQUE JR/Primary Examiner, Art Unit 3629 3/12/2026