DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
The amendment received on 11 August 2025 has been acknowledged and entered.
Claims 1, 4-6, 8-11, 14-18, and 20 have been amended.
Claims 2-3, 7, 12-13, and 19 have been canceled. No new claims have been added.
Claims 1, 4-6, 8-11, 14-18, and 20 are currently pending.
Response to Amendments and Arguments
Applicant's amendment filed 11 August 2025 with respect to the objection to claim 12 has been fully considered and is persuasive in view of Applicant's cancellation of claim 12. Accordingly, the objection to claim 12 has been withdrawn.
Applicant's arguments filed 11 August 2025 with respect to the rejection of claims 1, 4-6, 8-11, 14-18, and 20 under 35 U.S.C. 101 have been fully considered but they are not persuasive.
Applicant argues (in REMARKS, page 8) that amended claim 1 is patent-eligible under 35 U.S.C. 101. The claim is directed to a specific technological improvement to a point-of-sale (POS) system and is not directed to a patent-ineligible abstract idea. The invention provides a practical and specific solution to the technological problem of how to automatically, consistently, and dynamically price items based on their real-time physical condition, for example, discoloration or damage. This is achieved through a specific arrangement of technical components, including a portable terminal, a store server, and a specially trained machine-learning model, that work in concert to transform image data into a dynamically calculated price displayed within a specific, interactive user interface.
In response to Applicant's argument, the Examiner respectfully disagrees and notes that Applicant appears to be referencing a business solution to a business problem through the use of generic computer components (the portable terminal, store server, and ML model) as tools to implement the abstract idea of setting the selling price of an item based on the condition of the item. The Examiner suggests amending the claims to include the mechanism that provides an overall improvement to the functioning of the POS system, portable terminal, or store server.
Applicant argues (in REMARKS, pages 8-9) that the Examiner has characterized the claim as being directed to the abstract ideas of a "Mental Process," "Managing Personal Behavior," and "Commercial Interactions." The Applicant respectfully disagrees. The claimed subject matter is not directed to these concepts themselves but to a specific technological method and system for performing a function that is impossible for a human to practically perform. Regarding the Examiner's characterization of the claim as a "Mental Process," the Applicant contends that the claim recites specific computer-implemented functions that go far beyond human cognition. The Examiner suggests that identifying an item and its condition are mental processes. However, claim 1 requires the store server to "execute image recognition on the received image" and "execute a call to a machine-learning model . . . to determine a degree of discoloration or deformation." These are not mere mental steps, and they cannot be practically performed in the human mind. They involve specific, complex computational processes that analyze digital image data and execute a computer model to output a quantitative degree of condition. A human cannot practically process such pixel data and consistently calculate a precise "degree" of discoloration or deformation in the manner required by the claimed system.
In response to Applicant’s argument, the Examiner respectfully disagrees and notes that, under the broadest reasonable interpretation of the claim language, a human could “identify an item shown in the image,” as well as determine a discount amount and calculate a selling price. The use of a physical aid (i.e., pen and paper) to help perform a mental step (e.g., a mathematical calculation) does not negate the mental nature of this limitation. Therefore, the Examiner maintains the claims are patent ineligible.
Applicant argues (in REMARKS, pages 8-9) that, furthermore, the claim is not directed to "Managing Personal Behavior." The Examiner's assertion that steps like displaying and transmitting information are akin to managing human behavior overlooks the specific technological process recited. The claim requires that the portable terminal "generate a first screen showing the captured image" and, upon receiving data from the server, "update the first screen to further show, over the captured image" the price and interactive buttons. This describes a specific sequence of machine operations, including capturing an image, transmitting it to another machine (the server), receiving processed data back, and dynamically updating a graphical user interface. This is a technical interaction between system components, not an abstract instruction for managing human activity. Finally, while the system operates in a commercial context, the claim is not directed to a "Commercial Interaction" in the abstract. The features of amended claim 1, for example, "execute image recognition on the received image," "execute a call to a machine-learning model . . . to determine a degree of discoloration or deformation," "generate a first screen showing the captured image" and "update the first screen to further show, over the captured image," recite a specific technological configuration and operation, not the abstract business practice of discounting items.
In response to Applicant’s argument, the Examiner respectfully disagrees and notes that
the sub-groupings encompass both activity of a single person (for example, a person following a set of instructions or a person signing a contract online) and activity that involves multiple people (such as a commercial interaction), and thus, certain activity between a person and a computer may fall within the “certain methods of organizing human activity” grouping. Therefore, the Examiner maintains the claims are directed to managing personal behavior, relationships, or interactions between people and are not patent eligible. Applicant further argues that, while the system operates in a commercial context, the claim is not directed to a "Commercial Interaction" in the abstract, and that the features of amended claim 1, for example, "execute image recognition on the received image," "execute a call to a machine-learning model . . . to determine a degree of discoloration or deformation," "generate a first screen showing the captured image" and "update the first screen to further show, over the captured image," recite a specific technological configuration and operation, not the abstract business practice of discounting items.
In response to Applicant’s arguments, the Examiner respectfully disagrees and notes that the claims as a whole recite certain methods of organizing human activity, which include the commercial interaction process of providing payment for an item at a discounted price based on the item’s discoloration or deformation. The use of image recognition, a first screen, and a ML model does not take the claim out of the methods of organizing human activity (commercial interaction) grouping.
Applicant argues (in REMARKS, page 10) that even if, for the sake of argument, the amended claim were considered to touch upon an abstract idea, it is patent-eligible because it is integrated into a practical application. The claim recites an inventive concept by specifying a concrete, non-generic system that improves the functionality of POS technology. The amended claim is directed to a specific type of system, a point-of-sale (POS) system, and improves its functionality in a tangible way. A conventional POS system is a tool for price look-up and transaction processing based on static data from, for example, a barcode. The claimed features transform the POS system into a dynamic pricing tool that responds to the real-time, physical condition of an item, i.e., discoloration or deformation. This is achieved by the specific combination of a portable terminal with a camera, a server executing a specially trained machine-learning model for visual analysis, and a user interface that seamlessly integrates the dynamically generated price with registration and payment functions on the very screen showing the item's condition. This specific architecture provides a practical application that solves the technological problems of inconsistent manual discounting and inefficient inventory management for items with variable quality, thereby improving the functioning of the POS system itself. For at least the reasons stated above, amended claim 1 is not directed to an abstract idea but to a patent-eligible practical application of technology that improves the functioning of a POS system. The same reasons apply to amended claims 11 and 18, which recite features substantially similar to claim 1. Accordingly, Applicant respectfully requests the withdrawal of the rejection under 35 U.S.C. § 101.
In response to Applicant’s argument, the Examiner respectfully disagrees and notes that, first, Applicant appears to be describing a business solution to the business problem of inconsistent manual discounting and inefficient inventory management for items with variable quality through the use of a generic POS system to implement the abstract idea. Second, generally linking the use of the judicial exception to a particular technological environment or field of use does not integrate the judicial exception into a practical application – see MPEP 2106.05(h). Further, the courts determined that "[p]atents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101" (Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025), slip op. at 18), and that "[t]he requirements that the machine learning model be 'iteratively trained' or dynamically adjusted in the Machine Learning Training patents do not represent a technological improvement" (id., slip op. at 12). The Examiner suggests showing a teaching in the specification on how the invention improves a technology and establishing a clear nexus between the claim language and the improvement to technology, where both the claims and the specification support the asserted technical improvement.
Applicant's arguments filed 11 August 2025 with respect to the rejection of claims 1, 4-6, 8-11, 14-18, and 20 under 35 U.S.C. 103 have been fully considered but they are not persuasive.
Applicant argues (in REMARKS, pages 10-11) that by this Response, claim 1 is amended to recite, inter alia,
"upon receipt of a selling price of an item from the store server, update the first screen to further show, over the captured image:
the selling price of the item,
a first button for registering the item, and
a second button for performing a payment process on one or more registered items, and when the second button is operated through the updated first screen, execute a payment process on the one or more registered items."
Amended claims 11 and 18 recite substantially similar features. Applicant respectfully submits that the above-recited features are not taught by the prior art.
Mochida teaches displaying a price over a captured image and a button for registering the item. However, Mochida is silent on showing a button for a payment process together with the image and the received price.
Rogers teaches the method of determining a price adjustment based on the quality of produce assessed from an image. However, Rogers fails to teach any screen displayed on the portable device, which is required by the amended claims.
In view of the reasons stated above, the cited references fail to render claims 1, 11, and 18 obvious. By virtue of their dependence, claims 10 and 17 are also patentable for at least the same reasons. Accordingly, withdrawal of this rejection is respectfully requested.
In response to Applicant’s arguments, the Examiner respectfully notes that Mochida teaches a first button in [0072] ("This screen 551 includes a product information display region 5041, a discount information display region 5042, a current price display region 5043, a registration button 5044, and a cancel button 5045"; also see FIG. 10) and in [0076] ("The registration button 5044 is a button for receiving an operation of registering, as a transaction target, the product specified by the individual item code read by the reading unit 501 of the controller 50"; also see [0053]). Further, Ikezawa discloses a second button for performing a payment process on one or more registered items in [0134] ("FIG. 16 is an explanatory diagram illustrating a display example of a purchase product list. The display unit 1010 displays a screen d060 related to the received purchase product list. The screen d060 includes, for example, an outline field d061 of the purchase product list, a detail field d062, a product addition button d064, a payment button d065, a transaction cancel button d066, and a name display field d067 of the operator.").
Ikezawa further discloses that, when the second button is operated through the updated first screen, a payment process is executed on the one or more registered items, in [0138] ("In FIG. 16, when the payment button d065 is pressed, the store mobile terminal device 10 shifts the process to a settlement process. The settlement process will be described later"; also see [0133]).
Applicant argues (in REMARKS, pages 11-12), regarding claims 5, 6, 8, 9, 15, and 16, which were rejected under 35 U.S.C. 103 as being unpatentable over Mochida in view of Rogers et al. and further in view of Hagen et al. (US PG Pub. 2023/0147769 A1), that, as discussed above, Mochida and Rogers fail to teach the above-recited features of amended claims 1, 11, and 18. Hagen teaches displaying a payment button on a mobile device, but in a different context. Hagen discloses a "Pay & Go" button on a "Virtual Shopping Cart" screen, which is essentially a list or summary of items already added to the cart.
In contrast, the amended claims require displaying buttons over the first screen where an image of an individual item and its price are displayed upon receipt of the price. Hagen's "Pay & Go" button is for checking out the entire cart, not for acting on a single item immediately after its price is displayed over its image.
In view of the above, Mochida, Rogers, and Hagen, alone and in combination, fail to render claims 1, 11, and 18 obvious. Therefore, by virtue of their dependence, claims 5, 6, 8, 9, 15, and 16 are patentable for at least the same reasons. Accordingly, withdrawal of this rejection is respectfully requested.
In response to Applicant’s argument, the Examiner respectfully disagrees for reasons discussed above regarding Mochida disclosing a first button in [0072]; Ikezawa disclosing a second button for performing a payment process on one or more registered items in [0134]; and Ikezawa further disclosing when the second button is operated through the updated first screen, execute a payment process on the one or more registered items in [0138].
Applicant argues (in REMARKS, page 12) regarding claims 4, 14, and 20, which were rejected under 35 U.S.C. 103 as being unpatentable over Mochida in view of Rogers et al. and Hagen et al. as applied to claims 2, 13, and 19 above, and further in view of Oguchi (JP 2022108422 A).
As discussed above, Mochida, Rogers, and Hagen fail to teach the above-recited features of amended claims 1, 11, and 18. Oguchi is alleged to teach aspects related to a discount based on the store's closing time. However, whatever Oguchi teaches in this regard, it is completely silent regarding "upon receipt of a selling price of an item from the store server, update the first screen to further show, over the captured image: the selling price of the item, a first button for registering the item, and a second button for performing a payment process on one or more registered items, and when the second button is operated through the updated first screen, execute a payment process on the one or more registered items," as required by the amended claims. In view of the reasons stated above, the cited references fail to render claims 1, 11, and 18 obvious. By virtue of their dependence, claims 4, 14, and 20 are patentable for at least the same reasons. Accordingly, withdrawal of this rejection is respectfully requested.
In response to Applicant’s argument, the Examiner respectfully disagrees for reasons discussed above regarding Mochida disclosing a first button in [0072]; Ikezawa disclosing a second button for performing a payment process on one or more registered items in [0134]; and Ikezawa further disclosing when the second button is operated through the updated first screen, execute a payment process on the one or more registered items in [0138].
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 4-6, 8-11, 14-18, and 20 are rejected under 35 U.S.C. 101 because the claimed invention recites an abstract idea without significantly more.
Step 1
Claims 1, 4-6, and 8-10 are directed to a system (i.e., a machine); claims 11 and 14-17 are directed to a store server (i.e., a machine); and claims 18 and 20 are directed to a non-transitory computer readable medium (i.e., a manufacture). Therefore, claims 1, 4-6, 8-11, 14-18, and 20 all fall within one of the four statutory categories of invention.
Step 2A Prong 1
Independent claim 1 substantially recites:
stores item information about an item; and
capture an image,
generate a first screen showing the captured image, and display the generated screen,
transmit the captured image, and upon receipt of a selling price of an item, update the first screen to further show over the captured image:
the selling price of the item,
a first button for registering the item,
a second button for performing a payment process on one or more registered items, and
when the second button is operated through the updated screen, execute a payment process on the one or more registered items,
wherein, upon receipt of an image: execute image recognition on the received image and identify an item shown in the image, execute a call to a machine-learning model with the received image to determine a degree of discoloration or deformation of the item shown in the image, the machine-learning model having been trained with a plurality of images of various items and information indicating degrees of discoloration or deformation of the various items,
determine a discount amount of the identified item depending on the determined degree of discoloration or deformation of the identified item,
calculate a selling price for the identified item by subtracting the discount amount from a regular selling price of the identified item, and
transmit the selling price.
Independent claim 11 substantially recites:
stores item information including regular selling prices of items sold at the store;
upon receipt of an image captured, execute image recognition on the image and identify an item shown in the image, execute a call to a machine-learning model with the image to determine a degree of discoloration or deformation of the item shown in the image,
the machine-learning model having been trained with a plurality of images of various items and information indicating degrees of discoloration or deformation of the various items,
determine a discount amount of the identified item depending on the determined degree of discoloration or deformation of the identified item,
calculate a selling price for the identified item by subtracting the discount amount from a regular selling price of the identified item,
transmit the selling price to the portable terminal such that the selling price is displayed over the captured image on the portable terminal together with a first button for registering the item and a second button for performing a payment process on one or more registered items, and
acquire information on the one or more registered items on which the payment process was performed.
Independent claim 18 substantially recites:
capturing an image using a camera;
generating a first screen showing the captured image, and displaying the generated first screen;
transmitting the captured image to a store,
executing image recognition on the image transmitted to and received by the store and identifying an item shown in the image;
executing a call to a machine-learning model with the image to determine a degree of discoloration or deformation of the item shown in the image,
the machine-learning model having been trained with a plurality of images of various items and information indicating degrees of discoloration or deformation of the various items;
determining a discount amount of the identified item depending on the determined degree of discoloration or deformation of the identified item;
calculating a selling price for the identified item by subtracting the discount amount from a regular selling price of the identified item;
transmitting the selling price from the store;
updating the first screen on the portable terminal to further show, over the captured image: the selling price,
a first button for registering the item, and
a second button for performing a payment process on one or more registered items; and
when the second button is operated through the updated first screen, executing a payment process on the one or more registered items.
As per Independent claim 1, the limitations as a whole recite a method of organizing human activity. The aforementioned limitations, as drafted, are processes that, under their broadest reasonable interpretation, are directed towards:
stores item information about an item; and
capture an image,
generate a first screen showing the captured image, and display the generated screen,
transmit the captured image, and upon receipt of a selling price of an item, update the first screen to further show over the captured image:
the selling price of the item,
a first button for registering the item,
a second button for performing a payment process on one or more registered items, and
when the second button is operated through the updated screen, execute a payment process on the one or more registered items,
wherein, upon receipt of an image: execute image recognition on the received image and identify an item shown in the image, execute a call to a machine-learning model with the received image to determine a degree of discoloration or deformation of the item shown in the image, the machine-learning model having been trained with a plurality of images of various items and information indicating degrees of discoloration or deformation of the various items,
determine a discount amount of the identified item depending on the determined degree of discoloration or deformation of the identified item,
calculate a selling price for the identified item by subtracting the discount amount from a regular selling price of the identified item, and
transmit the selling price, which may be interpreted at least as a “Mental Process” (concepts performed in the human mind), which includes observations, evaluations, judgments, and opinions, and/or “Managing Personal Behavior or Relationships or Interactions Between People,” which includes social activities, teaching, and following rules or instructions, and/or “Commercial Interactions,” which includes agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations, in claim 1. That is, nothing in the claim elements precludes the steps from practically being performed in the human mind (identify, determine, calculate); as managing personal behavior or relationships or interactions between people (stores, capture, generate, display, transmit, update, registering, performing, execute payment, execute and identify, execute, determine, calculate, and transmit); and as commercial interactions (stores, capture, generate, display, transmit, update, registering, performing, execute payment, execute and identify, execute, determine, calculate, and transmit). For example, stores, capture, generate, display, transmit, update, registering, performing, execute payment, execute and identify, execute, determine, calculate, and transmit, in the context of claim 1, encompass a user who stores item information, captures an image, generates a screen, displays the screen, transmits the captured image, updates the first screen, registers the item, performs a payment, executes a payment process, executes image recognition and identifies an item shown in the image, executes a call to a ML model, determines a discount amount, calculates a selling price, and transmits the selling price.
As per Independent claim 11, the limitations as a whole recite a method of organizing human activity. The aforementioned limitations, as drafted, are processes that, under their broadest reasonable interpretation, are directed towards:
stores item information including regular selling prices of items sold at the store;
upon receipt of an image captured, execute image recognition on the image and identify an item shown in the image, execute a call to a machine-learning model with the image to determine a degree of discoloration or deformation of the item shown in the image,
the machine-learning model having been trained with a plurality of images of various items and information indicating degrees of discoloration or deformation of the various items,
determine a discount amount of the identified item depending on the determined degree of discoloration or deformation of the identified item,
calculate a selling price for the identified item by subtracting the discount amount from a regular selling price of the identified item,
transmit the selling price to the portable terminal such that the selling price is displayed over the captured image on the portable terminal together with a first button for registering the item and a second button for performing a payment process on one or more registered items, and
acquire information on the one or more registered items on which the payment process was performed, which may be interpreted at least as a “Mental Process” (concepts performed in the human mind), which includes observations, evaluations, judgments, and opinions, and/or “Managing Personal Behavior or Relationships or Interactions Between People,” which includes social activities, teaching, and following rules or instructions, and/or “Commercial Interactions,” which includes agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations – but for the recitation of generic computer components in claim 11. That is, nothing in the claim elements precludes the steps from practically being performed in the human mind (identify, determine, calculate); as managing personal behavior or relationships or interactions between people (stores, execute and identify, execute, determine, calculate, transmit, registering, performing, acquire); and as commercial interactions (stores, execute and identify, execute, determine, calculate, transmit, registering, performing, acquire). For example, stores, execute and identify, execute, determine, calculate, transmit, registering, performing, and acquire, in the context of claim 11, encompass a user who stores item information, executes image recognition and identifies an item shown in the image, executes a call, determines a discount, calculates a selling price, transmits the selling price, registers the item, performs a payment, and acquires information.
As per Independent claim 18, the limitations as a whole recite a method of organizing human activity. The aforementioned limitations, as drafted, are processes that, under their broadest reasonable interpretation, are directed towards:
capturing an image using a camera;
generating a first screen showing the captured image, and displaying the generated first screen;
transmitting the captured image to a store,
executing image recognition on the image transmitted to and received by the store and identifying an item shown in the image;
executing a call to a machine-learning model with the image to determine a degree of discoloration or deformation of the item shown in the image,
the machine-learning model having been trained with a plurality of images of various items and information indicating degrees of discoloration or deformation of the various items;
determining a discount amount of the identified item depending on the determined degree of discoloration or deformation of the identified item;
calculating a selling price for the identified item by subtracting the discount amount from a regular selling price of the identified item;
transmitting the selling price from the store;
updating the first screen on the portable terminal to further show, over the captured image: the selling price,
a first button for registering the item, and
a second button for performing a payment process on one or more registered items; and
when the second button is operated through the updated first screen, executing a payment process on the one or more registered items, which may be interpreted at least as a “Mental Process” (concepts performed in the human mind), which includes observations, evaluations, judgments, and opinions, and/or “Managing Personal Behavior or Relationships or Interactions Between People,” which includes social activities, teaching, and following rules or instructions, and/or “Commercial Interactions,” which includes agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations, in claim 18. That is, nothing in the claim elements precludes the steps from practically being performed in the human mind (identifying, determining, calculating); as managing personal behavior or relationships or interactions between people (capturing, generating, displaying, transmitting, executing and identifying, executing, determining, calculating, transmitting, updating, registering, performing, executing); and as commercial interactions (capturing, generating, displaying, transmitting, executing and identifying, executing, determining, calculating, transmitting, updating, registering, performing, executing). For example, capturing, generating, displaying, transmitting, executing and identifying, executing, determining, calculating, transmitting, updating, registering, performing, and executing, in the context of claim 18, encompass a user capturing an image, generating a first screen, displaying the first screen, transmitting the captured image, executing image recognition, executing a call, determining a discount, calculating a selling price, transmitting the selling price, updating the screen, registering the item, performing a payment, and executing a payment.
Step 2A Prong 2
This judicial exception is not integrated into a practical application. In particular, claim 1 recites the additional elements (e.g., “a system,” “a store server,” “a portable terminal,” “a camera,” “a display,” “a communication controller,” “a processor,” and “a machine-learning model”); claim 11 recites the additional elements (e.g., “a store server,” “a portable terminal,” “a communication controller,” “a memory,” “a processor,” and “a machine-learning model”); and claim 18 recites the additional elements (e.g., “a non-transitory computer readable medium,” “a program,” “a computer,” “a portable terminal,” “a camera,” “a store server,” and “a machine-learning model”), using the processor or computer to perform the stores, capture, generate, display, transmit, update, registering, performing, execute payment, execute and identify, execute, determine, calculate, and transmit steps in claim 1; the stores, execute and identify, execute, determine, calculate, transmit, registering, performing, and acquire steps in claim 11; and the capturing, generating, displaying, transmitting, executing and identifying, executing, determining, calculating, transmitting, updating, registering, performing, and executing steps in claim 18.
The “processor” and/or “computer” in the steps is recited at a high level of generality (i.e., as a generic processor performing the generic computer functions of the stores, capture, generate, display, transmit, update, registering, performing, execute payment, execute and identify, execute, determine, calculate, and transmit steps in claim 1; the stores, execute and identify, execute, determine, calculate, transmit, registering, performing, and acquire steps in claim 11; and the capturing, generating, displaying, transmitting, executing and identifying, executing, determining, calculating, transmitting, updating, registering, performing, and executing steps in claim 18) such that it amounts to no more than mere instructions to “apply” the exception using a generic computer component. That is, the aforementioned limitations merely invoke the generic components as a tool to perform the abstract idea, e.g., see MPEP 2106.05(f). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Further, with regard to the “communication controller” and the “machine-learning model,” the “transmit” limitations in claims 1, 11, and 18 amount to mere data gathering, characterized as transmitting or receiving data over a network, which is insignificant post-solution activity; these elements are also recited at a high level of generality and merely automate the receive and acquire steps.
Step 2B
Independent claims 1, 11, and 18 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using the “system,” “store server,” “portable terminal,” “camera,” “display,” “communication controller,” “processor,” and “machine learning model” in claim 1 to perform the stores, capture, generate, display, transmit, update, registering, performing, execute payment, execute and identify, execute, determine, calculate, and transmit steps; the additional elements of using the “store server,” “portable terminal,” “communication controller,” “memory,” “processor,” and “machine-learning model” in claim 11 to perform the stores, execute and identify, execute, determine, calculate, transmit, registering, performing, and acquire steps; and the additional elements of using the “non-transitory computer readable medium,” “program,” “computer,” “portable terminal,” “camera,” “store server,” and “machine learning model” in claim 18 to perform the capturing, generating, displaying, transmitting, executing and identifying, executing, determining, calculating, transmitting, updating, registering, performing, and executing steps amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Thus, even when viewed as a whole, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. The claims are ineligible.
As per dependent claims 4 and 14, the recitation, “determines/determines the discount amount further based on a time when the image was captured…” is further directed to a method of organizing human activity as described in claims 1 and 11, respectively. Therefore, this judicial exception is not meaningfully integrated into a practical application, or significantly more than the abstract idea.
As per dependent claims 5 and 15, the recitation, “determines/determines the discount amount further based on an expiration date of the identified item” is further directed to a method of organizing human activity and/or a mental process as described in claims 1 and 11, respectively. Therefore, this judicial exception is not meaningfully integrated into a practical application, or significantly more than the abstract idea.
As per dependent claims 6 and 16, the recitations, “transmit/transmit information indicating that a discount is applied because of discoloration or deformation of the identified item” and “display/display the information transmitted … to the selling price” are further directed to a method of organizing human activity as described in claims 1 and 11, respectively. Therefore, this judicial exception is not meaningfully integrated into a practical application, or significantly more than the abstract idea.
As per dependent claims 8, 9, and 20, the limitations merely narrow the previously recited abstract idea limitations. Dependent claim 8 recites that the portable terminal is attachable to a shopping cart. Dependent claim 9 recites that the portable terminal is attachable to a handle of the shopping cart. Dependent claim 20 recites that the discount amount is determined further based on a time when the image was captured… and a closing time of the store. For the reasons described above, with respect to claims 8, 9, and 20, this judicial exception is not meaningfully integrated into a practical application, or significantly more than the abstract idea.
As per dependent claim 10, the recitations, “transmit… information indicating the item on which the payment process was performed” and “display information on a result of the payment process,” are further directed to a method of organizing human activity as described in claim 1. Therefore, this judicial exception is not meaningfully integrated into a practical application, or significantly more than the abstract idea.
As per dependent claim 17, the recitation, “stores the information on one or more registered items on which the payment process was performed” is further directed to a method of organizing human activity as described in claim 11. Therefore, this judicial exception is not meaningfully integrated into a practical application, or significantly more than the abstract idea. Further, the recitation, “a sales file,” is another computer component recited at a high level of generality and is merely invoked as a tool to perform the abstract idea. Similar to claim 11, the recitation does not provide a practical application of the abstract idea, or significantly more than the abstract idea.
Dependent claims 4-6, 8-10, 14-17, and 20 have been given the full two-part analysis, including analyzing the additional limitations both individually and in combination. Dependent claims 4-6, 8-10, 14-17, and 20, when analyzed individually and in combination, are also held to be patent ineligible under 35 U.S.C. 101. The dependent claims fail to establish that the claims do not recite an abstract idea because the additional recited limitations of the dependent claims merely further narrow the abstract idea of the independent claims. The dependent claims recite no additional elements that would integrate the judicial exception into a practical application or amount to significantly more than the judicial exception. Simply implementing the abstract idea on generic computer components is not a practical application of the judicial exception and does not amount to significantly more than the judicial exception. The claims are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 5-6, 8-11, 15-17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Mochida (US PG Pub. 2024/0104614 A1) in view of Rogers et al. (US PG Pub. 2022/0327685 A1) in further view of Ikezawa (US PG Pub. 2024/0046237 A1) and Hagen et al. (US PG Pub. 2023/0147769 A1).
As per claims 1, 11, and 18, Mochida discloses a point-of-sale (POS) system and store server for setting and displaying a selling price of an item sold at a store, and registering the item for purchase, comprising:
a store server that stores item information about an item including regular selling price of items sold at the store (Mochida: [0065], see FIG. 12; the inquiry transmitting unit 502 transmits the individual item code output by the reading unit 501 to the server 1 to make an inquiry (request) for information regarding the product specified by the individual item code); and
a portable terminal connectable to the store server and including: (Mochida: [0030],[0033] The server 1 is an example of an information processing apparatus that performs processing relating to a transaction of a product and is installed in a store. The server 1 and the user terminal 5 are communicably connected to each other via the network 3)
a camera, a display, a communication controller, and a processor (Mochida: [0054] FIG. 5 is a diagram showing an example of a hardware configuration of the user terminal 5. As shown in FIG. 5, the user terminal 5 includes a CPU 51, a ROM 52, a RAM 53, a communication device 54, a display device 55, an operation device 56, an audio output device 57, a camera 58, and a storage device 59) configured to:
control the camera to capture an image, generate a first screen showing the captured image, and control the display to display the generated first screen (Mochida: [0065],[0072],[0083],[0088]),
control the communication controller to transmit the captured image to the store server (Mochida: [0084]), and upon receipt of a selling price of an item from the store server, update the first screen to further show, over the captured image:
the selling price of the item (Mochida: [0092] In this case, on the screen 553, the current price display region 5043 and the registration button 5044 are not displayed and a lowest price display label 5048 is superimposed and displayed on the image of the captured image display field 5047. The lowest price display label 5048 is an example of a mark image); (Mochida: [0055] The CPU 51 is an example of a processor and integrally controls the operation of the user terminal 5); and (Mochida: [0056],[0059] In the controller 50, the CPU 51 operates in accordance with the program that is stored in the ROM 52 or the storage device 59 and developed into the RAM 53, thereby executing various types of processing. Such a controller 50 is connected to the respective units (the storage device 59, the communication device 54, the display device 55, the operation device 56, the audio output device 57, and the camera 58) via the bus), and
a first button for registering the item (Mochida: [0072], also see FIG. 10; This screen 551 includes a product information display region 5041, a discount information display region 5042, a current price display region 5043, a registration button 5044, and a cancel button 5045; [0076] The registration button 5044 is a button for receiving an operation of registering, as a transaction target, the product specified by the individual item code read by the reading unit 501 of the controller 50; also see [0053]), and
wherein the store server is configured to, upon receipt of an image from the portable terminal: decode the received image (Mochida: [0065] The reading unit 501 decodes the code symbol 9 included in an image captured by the camera 58 to obtain an individual item code. The inquiry transmitting unit 502 transmits the individual item code output by the reading unit 501 to the server 1 to make an inquiry (request) for information regarding the product specified by the individual item code).
While Mochida discloses decoding (recognizing) the one or more code symbols from the received image in [0088], Mochida does not explicitly disclose, however, Rogers et al. discloses:
execute image recognition on the received image and identify an item shown in the image (Rogers et al.: [0036] As another example, a produce identifier in the scanned data, decoding the product identifier, and matching the decoded product identifier of the particular food item with the product identifier of the food item that is stored in the data store),
execute a call to a machine-learning model with the received image (Rogers et al.: Abstract; [0036] Moreover, the first and second sets of the trained models can be trained using convolution neural networks (CNNs). At least one of the image data and the scanned data can include color data, RGB color models, and hyperspectral data; [0106],[0152] For example, when evaluating the produce 112A-N (step A), the evaluation system 102 can receive the image data including histograms, RGB color models, hyperspectral data, multispectral data, etc. The image data can include images of a particular produce having some particular feature to be modeled, such as rot and desiccation; also see [0034]),
to determine a degree of discoloration or deformation of the item shown in the image, the machine-learning model having been trained with a plurality of images of various items and information indicating degrees of discoloration or deformation of the various items (Rogers et al.: Abstract; [0036] Moreover, the first and second sets of the trained models can be trained using convolution neural networks (CNNs). At least one of the image data and the scanned data can include color data, RGB color models, and hyperspectral data; [0106],[0152] For example, when evaluating the produce 112A-N (step A), the evaluation system 102 can receive the image data including histograms, RGB color models, hyperspectral data, multispectral data, etc. The image data can include images of a particular produce having some particular feature to be modeled, such as rot and desiccation; also see [0034]),
determine a discount amount of the identified item depending on the determined degree of discoloration or deformation of the identified item (Rogers et al.: [0005] This document generally describes technology for determining and using produce metrics, such as quality metrics (e.g., ripeness metric, freshness metric, taste metrics) and pricing metrics (e.g., price of produce at current quality level) that may change as produce (e.g., fruits, vegetables) ripens or otherwise changes over time); and (Rogers et al.: [0043] The pricing computing system can receive, from the quality assessment system, the current quality level of the particular food item, and retrieve, from a pricing models data store, a third set of trained models that can be trained to determine price adjustments of the particular food item based on the current quality level of the particular food item. Each of the trained models can be trained using at least (i) pricing data of other food items having poor quality levels and (ii) image data of the other food items having high quality levels. The pricing computing system can determine, based on applying the third set of trained models to the current quality level of the particular food item, a price adjustment for the particular food item, and transmit, to the user device, the price adjustment for display at the user device); also see [0044].
Rogers et al. further discloses in [0159]: The pricing system 202 can apply the model(s) (step K). A pricing model for the grocery store can, for example, be trained to reduce a price of the produce by a predetermined percentage once the produce is at peak ripeness. One or more pricing models can be trained to reduce price based on quality of the produce lowering… One or more pricing models can also be trained to implement a logarithmic decrease in price once past peak ripeness. One or more pricing models can be trained to plummet the price once the produce is no longer ripe. [0161] As mentioned, the price adjustment can be an increase or decrease in price based on current and/or projected quality of the produce. The price can be reduced as quality of the produce is reduced, according to a curve that can describe price elasticity of the quality of the produce. The price adjustment can also include a discount, which can be applied at the current time t