DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant's election with traverse of Group I in the reply filed on 12/08/2025 is acknowledged. The Applicant’s arguments for traversal are not found persuasive for the following reasons.
On Pages 1-2, Applicant argues that the inventions are not independent or distinct because Group II has no practical utility without Group I. As explained in the Restriction Requirement, “subcombination II has separate utility such as machine learning model training to classify produce based on color histogram and ultraviolet or infrared notation.” While Applicant characterizes this utility as “illusory and unsupported,” the Applicant fails to provide substantive argumentation that this would not be a separate utility of Group II. Applicant merely provides explanations from the Specification as to why Group II is useful in the context of Group I. First, note that while claims are interpreted in light of the Specification, limitations from the Specification are not imported into the claims. Second, even if Group II has utility in combination with Group I, that does not mean that Group II does not have separate, additional utility.
Applicant additionally argues on Page 2 that Groups I and II are analogous to training wheels on a bicycle, wherein the training wheels have no separate utility apart from their use on the bicycle. Like Groups I and II, training wheels and a bicycle are subcombinations usable together. They clearly do not overlap in scope and are not obvious variants. A bicycle is used for transportation, while training wheels are used for safety or learning/teaching; each accordingly has its own separate utility. This analogy therefore fails to support the Applicant’s assertion.
Applicant additionally argues on Pages 2-3 that Group I necessarily requires Group II. Accordingly, the Applicant admits on the record that Group I is not enabled without Group II. To overcome an enablement rejection on that basis, Applicant would be required to amend the subject matter of Group II into claim 1. See MPEP § 2172.01: "Depending on the specific facts at issue, a claim which omits subject matter disclosed to be essential to the invention as described in the specification or in other statements of record may be rejected under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, as not enabling." Nevertheless, the Examiner disagrees with the Applicant’s assertion that Group I necessarily requires Group II. While Group I may work better in combination with Group II, it is not inoperable without Group II. For example, the MLM of Group I may be trained by methods other than those disclosed in Group II.
Next, Applicant argues on Page 3 that the inventions share a single, unified inventive concept. This Application is not a PCT application or a national stage entry of a PCT application, so Unity of Invention rules do not apply. As per Applicant's example, "key cutting" and "lock opening" are in fact two different operations. Cutting a key requires going to a hardware store, placing the source key and a blank key into a machine, and letting the machine work, while "lock opening" requires inserting a key into a lock and turning the key. It is not clear how these could be considered to share the same inventive concept.
Next, on Pages 3-4, Applicant argues that there is no serious search or examination burden. Applicant first argues that the inventions are in the same field of search, and that the prior art relevant to Group I would “necessarily encompass the training methodologies of Group II”. This is inaccurate for several reasons. First, the Applicant assumes that there is only one way to train an MLM, and that another invention could not describe a training strategy that differs from the strategy of Group II. This is clearly not the case, as MLMs can be trained in a plurality of ways. Second, many inventions focus on a method of training an MLM without thoroughly discussing implementation, and many inventions focus on implementing a trained MLM without thoroughly discussing the training. Therefore, the Applicant’s assertion that all inventions that use an MLM necessarily discuss both training and implementation of MLMs in depth is untrue. Applicant further argues that Groups I and II would encompass the same search strategy, and that no additional examination time would be required. For the reasons recited above, neither of these assertions is true. Searching for MLM training methods requires a distinct and divergent approach compared to searching for the use of a trained MLM to achieve a goal.
With respect to Applicant’s arguments on pages 5-9, the Examiner has reviewed Applicant's numerous arguments, which do not show any error in the Examiner's restriction.
The requirement is still deemed proper and is therefore made FINAL.
Claim Objections
Claim 6 is objected to because of the following informalities: Claim 6 should recite “wherein” before “receiving”. Appropriate correction is required.
Claim 9 is objected to because of the following informalities: Claim 9 should recite “marker” instead of “maker”. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-10, and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of determining produce characteristics and modifying workflow, without significantly more.
The claim recites: “receiving at least one image depicting at least one produce item placed on a scale of a terminal during a transaction;
providing the at least one image to a machine learning model (MLM) as input;
receiving at least one produce-related characteristic for the at least one produce item as output from the MLM; and
causing a workflow for the transaction to be modified based on the at least one produce-related characteristic.”
The limitations, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind. A person can look at an image, identify a produce-related characteristic, and change a workflow based on the characteristic. But for the recitation of image receipt and the MLM, the claim describes a standard process performed by a cashier. The image receipt amounts to insignificant, extra-solution activity (data collection). The MLM is discussed in the next paragraph.
This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a machine learning model, a scale, and a terminal. These are recited at a level of generality such that they amount to no more than a generic machine learning model, a generic scale, and a generic terminal. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are recited at a high level of generality. The claim is therefore directed to a judicial exception that is not integrated into a practical application and does not include additional elements sufficient to amount to significantly more than the judicial exception. This claim is not patent eligible.
Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of saving/storing metrics over iterations of the method, which can be done mentally. The claim is not patent eligible.
Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of saving/storing metrics at terminals with terminal identifiers, which can be done mentally and amounts to a minor modification of the generic terminal additional element. The claim is not patent eligible.
Claim 5 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of receiving multiple images from a camera. This amounts to a minor modification to the insignificant extra-solution activity (data collection), and the addition of a generic camera that fails to integrate the abstract idea into a practical application. The claim is not patent eligible.
Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of starting the method of claim 1 when a PLU is identified, which amounts to a mental process. A cashier can begin the process of claim 1 after identifying the PLU code. The claim is not patent eligible.
Claims 7-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of identifying if a produce item is bagged, determining if the produce item is contained within a specialized bag for organic produce, and identifying if the produce item has an organic marker. These can all be performed mentally/visually. The claims are not patent eligible.
Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of performing a basic calculation without subtracting bag weight when it is determined that the produce item is not in a bag, and identifying if the produce item has an organic marker. These amount to simple mental processes. The claim is not patent eligible.
Claim 19 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea analogous to the limitations of claims 1, 6, and 10. Accordingly, the reasons for the 35 U.S.C. 101 rejections of claims 1, 6, and 10 apply here. Claim 19 also incorporates the abstract idea of subtracting a bag weight when a produce item is bagged, which amounts to a basic mental process. Claim 19 further recites a generic server with a processor and non-transitory computer-readable storage medium, which amount to generic computer components that fail to integrate the abstract idea into a practical application. The claim is not patent eligible.
Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of describing the generic terminal as a generic self-service terminal or cashier-operated terminal. These amount to generic shopping terminals that fail to integrate the abstract idea into a practical application. The claim is not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5 and 7 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sampson (US20220198218A1).
Regarding claim 1, Sampson teaches “A method, comprising: receiving at least one image depicting at least one produce item placed on a scale of a terminal during a transaction; providing the at least one image to a machine learning model (MLM) as input;” (Sampson, Paragraphs 8 and 37, “Described herein are recognition devices and methods that are suitable for highly reliable recognition of category, bag type, and/or organic type of images of produce. In particular, recognition devices and methods described herein can be implemented in a self-checkout retail device to quickly and efficiently generate a category indicator, a bag type indicator, and/or an organic type indicator for an image of produce. Thereafter, the self-checkout retail device can use the category indicator, the bag type indicator, and/or the organic type indicator of a produce placed to obtain an accurate estimate of weight and cost of the produce.”; “In some implementations, the method 200 can also include executing an image recognition model to read an indication of information about the produce such as the weight from the image and generate a representation of information about the produce such as the weight. The image recognition model can be a neural network model that can receive an image with a first portion and a second portion. For example, the image can show a produce at the first portion and a scale displaying weight of the produce at the second portion. The image recognition model can use the second portion to generate the indication of weight from the image. The first compute device and/or the second compute device can then calculate an adjusted weight (e.g., the weight of the produce minus the weight of the bag) based on the representation of weight and the predicted bag type indicator. In some instances, the first compute device and/or the second compute device can calculate a price based on the adjusted weight, the predicted category indicator of the image, and/or the predicted organic type indicator of the image.”)
“receiving at least one produce-related characteristic for the at least one produce item as output from the MLM; and causing a workflow for the transaction to be modified based on the at least one produce-related characteristic.” (Sampson, Paragraphs 8 and 37, “Described herein are recognition devices and methods that are suitable for highly reliable recognition of category, bag type, and/or organic type of images of produce. In particular, recognition devices and methods described herein can be implemented in a self-checkout retail device to quickly and efficiently generate a category indicator, a bag type indicator, and/or an organic type indicator for an image of produce. Thereafter, the self-checkout retail device can use the category indicator, the bag type indicator, and/or the organic type indicator of a produce placed to obtain an accurate estimate of weight and cost of the produce.”; “In some implementations, the method 200 can also include executing an image recognition model to read an indication of information about the produce such as the weight from the image and generate a representation of information about the produce such as the weight. The image recognition model can be a neural network model that can receive an image with a first portion and a second portion. For example, the image can show a produce at the first portion and a scale displaying weight of the produce at the second portion. The image recognition model can use the second portion to generate the indication of weight from the image. The first compute device and/or the second compute device can then calculate an adjusted weight (e.g., the weight of the produce minus the weight of the bag) based on the representation of weight and the predicted bag type indicator. In some instances, the first compute device and/or the second compute device can calculate a price based on the adjusted weight, the predicted category indicator of the image, and/or the predicted organic type indicator of the image.” Accordingly, the bag determination is mapped to the characteristic, and the transaction modification is mapped to the weight adjustment (product weight minus bag weight).)
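For purposes of illustration only, the following non-limiting sketch shows the kind of workflow modification mapped above, in which an MLM-provided bag classification drives a tare adjustment to the priced weight. The function names and the tare value are hypothetical assumptions for illustration and are not drawn from Sampson or from the claims.

    # Illustrative sketch only; names and values are hypothetical.
    # The MLM's bag indicator (the produce-related characteristic)
    # drives a weight adjustment that modifies the transaction workflow.
    BAG_TARE_LB = 0.01  # hypothetical per-bag tare weight, in pounds

    def adjusted_price(gross_weight_lb: float, unit_price_per_lb: float,
                       bagged: bool) -> float:
        """Price the item, subtracting bag tare only when classified as bagged."""
        net_weight = gross_weight_lb - BAG_TARE_LB if bagged else gross_weight_lb
        return round(net_weight * unit_price_per_lb, 2)

    # Example: 1.25 lb of produce at $1.99/lb, classified by the MLM as bagged.
    assert adjusted_price(1.25, 1.99, bagged=True) == 2.47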
Regarding claim 2, Sampson teaches “The method of claim 1,”
“further comprising: receiving feedback data from the terminal indicating that the at least one produce-related characteristic was incorrect; and flagging the at least one image with the feedback data for continuous training of the MLM.” (Sampson, Paragraph 36, “In some embodiments, the method 200 can optionally include detecting an error in the predicted category indicator of the image, the predicted organic type indicator of the image, or the predicted bag type indicator of the image. For example, in some instances, the error can be detected and reported by a user of the first compute device and/or the second compute device. The first compute device and/or the second compute device can be configured to receive a corrected category indicator, a corrected organic type indicator, and/or a corrected bag type indicator. The first compute device and/or the second compute device can further train and refine the trained machine learning model at least based on the image, the error in the predicted category indicator, the predicted organic type indicator, and/or the predicted bag type indicator, the corrected category indicator, the corrected organic type indicator, and/or the corrected bag type indicator.”)
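The following is a non-limiting sketch of the feedback-flagging pattern described in the cited paragraph; the data structures and names are assumptions for illustration, not Sampson's implementation.

    from dataclasses import dataclass, field

    @dataclass
    class FeedbackExample:
        image_path: str   # the flagged image
        predicted: str    # characteristic the MLM output (e.g., "bagged")
        corrected: str    # operator-supplied correction (e.g., "unbagged")

    @dataclass
    class RetrainingQueue:
        """Collects flagged images with feedback data for continuous training."""
        examples: list = field(default_factory=list)

        def flag(self, image_path: str, predicted: str, corrected: str) -> None:
            self.examples.append(FeedbackExample(image_path, predicted, corrected))

    queue = RetrainingQueue()
    queue.flag("txn_0421.png", predicted="bagged", corrected="unbagged")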
Regarding claim 3, Sampson teaches “The method of claim 1,”
“maintaining, during each iteration of the method, metrics associated with the at least one produce-related characteristic of the at least one produce item and other metrics for other produce-related characteristics of other produce items.” (Sampson, Paragraph 37, “In some implementations, the method 200 can also include executing an image recognition model to read an indication of information about the produce such as the weight from the image and generate a representation of information about the produce such as the weight. The image recognition model can be a neural network model that can receive an image with a first portion and a second portion. For example, the image can show a produce at the first portion and a scale displaying weight of the produce at the second portion. The image recognition model can use the second portion to generate the indication of weight from the image. The first compute device and/or the second compute device can then calculate an adjusted weight (e.g., the weight of the produce minus the weight of the bag) based on the representation of weight and the predicted bag type indicator. In some instances, the first compute device and/or the second compute device can calculate a price based on the adjusted weight, the predicted category indicator of the image, and/or the predicted organic type indicator of the image.” Note that this demonstrates that bag weight (a metric associated with a produce characteristic of a produce item) is stored over iterations. Additionally, the use of category and organic types for price calculation requires that there are monetary values (metrics) associated with other produce characteristics of produce items stored over iterations. For example, each produce type has an associated price/pound.)
Regarding claim 4, Sampson teaches “The method of claim 3,”
“further comprising: maintaining the metrics and the other metrics by store based on terminal identifiers linked to the terminal and other terminals associated with each iteration of the method.” (A store with self-checkout terminals, as taught by Sampson, would inherently have the same metrics stored at terminals throughout the store; otherwise, prices would vary from one terminal to another.)
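As a non-limiting illustration of the claim 3 and claim 4 mappings above, metrics such as price per pound or bag tare weight might be maintained per terminal identifier as sketched below; the schema is an assumption for illustration and is not taken from Sampson.

    from collections import defaultdict

    # metrics[terminal_id][characteristic] -> value (e.g., price per pound,
    # bag tare weight). Keyed by terminal identifier per the claim 4 mapping.
    metrics = defaultdict(dict)

    def record_metric(terminal_id: str, characteristic: str, value: float) -> None:
        """Store a metric for one iteration of the method at a given terminal."""
        metrics[terminal_id][characteristic] = value

    record_metric("lane-03", "organic_apple_price_per_lb", 2.49)
    record_metric("lane-07", "bag_tare_lb", 0.01)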
Regarding claim 5, Sampson teaches “The method of claim 1,”
“wherein receiving the at least one image further includes receiving two or more images captured of the produce item by a single camera or by multiple cameras associated with the terminal.” (Sampson, Paragraphs 10 and 12 “While the methods and apparatus are described herein as processing data from a set of files, a set of images, a set of videos, a set of databases, and/or the like, in some instances a recognition device (e.g., recognition device 101 discussed below in connection with FIG. 1) can be used to generate the set of files, the set of images, the set of videos, a set of text, a set of numbers, and/or the set of databases. Therefore, the recognition device can be used to process and/or generate any collection or stream of data. As an example, the recognition device can process and/or generate any string(s), number(s), image(s), video(s), executable file(s), dataset(s), and/or the like.” Note that ‘image(s)’ indicates capability of receiving multiple images, and a video is a collection of multiple images (frames).; “The recognition device 101 includes a memory 102, a communication interface 103, and a processor 104. In some embodiments, the recognition device 101 can receive data including a set of images, a set of text data, and a set of numerical data, from a data source(s). The data source(s) can be or include, for example, an external hard drive (not shown) operatively coupled to the recognition device 101, the compute device 160, the server 170, and/or the like. In some instances, the recognition device 101 can receive a set of videos from the data source(s) and analyze the set of videos frame by frame to generate the set of images of produce. In some embodiments, the recognition device 101 can optionally include a camera 108 that captures the set of images. In addition, the recognition device 101 can include a set of peripheral devices (e.g., a keyboard, a text-to-speech device, and/or the like; not shown) to record the set of text data or the set of numerical data.”)
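The frame-by-frame video analysis Sampson describes for generating produce images might be sketched as follows; this assumes OpenCV is available, and the sampling interval is a hypothetical choice.

    import cv2  # assumes OpenCV is installed

    def frames_from_video(path: str, every_nth: int = 30) -> list:
        """Decode a video and keep every Nth frame as a candidate produce image."""
        cap = cv2.VideoCapture(path)
        frames, i = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % every_nth == 0:
                frames.append(frame)
            i += 1
        cap.release()
        return frames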
Regarding claim 7, Sampson teaches “The method of claim 1,”
“wherein receiving the at least one produce-related characteristic further includes receiving the at least one produce-related characteristic as an indication from the MLM that the at least one produce item is contained within a bag on the scale.” (Sampson, Paragraph 8, “Described herein are recognition devices and methods that are suitable for highly reliable recognition of category, bag type, and/or organic type of images of produce. In particular, recognition devices and methods described herein can be implemented in a self-checkout retail device to quickly and efficiently generate a category indicator, a bag type indicator, and/or an organic type indicator for an image of produce. Thereafter, the self-checkout retail device can use the category indicator, the bag type indicator, and/or the organic type indicator of a produce placed to obtain an accurate estimate of weight and cost of the produce.”)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sampson in view of Tsirulnik (US 20210125166 A1).
Regarding claim 6, Sampson teaches “The method of claim 1,”
While Sampson teaches receiving images (see the rejection of claim 1), Sampson does not expressly disclose receiving the images when an operator selects a PLU code for an item.
Tsirulnik discloses collecting an image when an operator selects a PLU code for an item (Tsirulnik, Paragraph 76, “In one embodiment, a method includes identifying a retail item placed on the scale in order to compare the identified retail item to a retail item selected during the standard retail item look-up process such as performed on a user interface terminal. This method provides improved loss prevention over the improper selection of a retail item during this lookup process. In operation, when a shopper places a retail item on the scanner/scale and the shopper chooses to look-up the retail item such as via a user interface terminal or to enter the price look-up (PLU) code, the method includes acquiring images of the retail item placed on the scale from two cameras to recognize the retail items. Further, the method includes determining those predicted retail items that are above a predefined confidence level and then comparing those predicted retail items with the retail item that was selected or entered through the look-up process. If the retail item selected is on the list of these predicted retail items, then the process continues as normal. If not, then an intervention is triggered. This method performs, among other things, the use case when a shopper places a retail item on the scanner/scale but selects a retail code for a less expensive retail item or a different retail item.”)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to perform the image collection of Sampson when an operator selects a PLU code, as disclosed by Tsirulnik.
The motivation for doing so would have been to indicate to the terminal that it should collect images/videos at that time. This is more computationally efficient than constantly imaging the terminal until the produce item appears. Additionally, the PLU code is needed for determination of price, which is the function of the register. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Sampson with the above teaching of Tsirulnik to fully disclose, “receiving the at least one image further includes receiving the at least one image when an operator of the terminal selects or enters a price lookup (PLU) code for the at least one produce item during the transaction.”
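For illustration, a minimal sketch of the event-driven capture rationale above follows, in which imaging begins only upon a PLU selection event rather than running continuously; all names are hypothetical assumptions, not drawn from Sampson or Tsirulnik.

    class StubCamera:
        """Stand-in for a terminal camera; a real device driver is assumed."""
        def capture(self) -> bytes:
            return b"frame"  # placeholder image data

    def on_plu_selected(plu_code: str, camera: StubCamera) -> list:
        """Capture a short burst of images only after a PLU code is selected."""
        return [camera.capture() for _ in range(2)]

    images = on_plu_selected("4011", StubCamera())  # e.g., PLU 4011 (bananas)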
Regarding claim 19, Sampson teaches “A system, comprising: at least one server comprising at least one processor and a non-transitory computer-readable storage medium, the non-transitory computer-readable storage medium comprising executable instructions, that when executed by at least one processor cause the at least one processor to perform operations,” (Sampson, Figure 1 and Paragraph 27, “The server 170 can be/include a compute device medium particularly suitable for data storage purpose and/or data processing purpose and can include, for example, a network of electronic memories, a network of magnetic memories, a server(s), a blade server(s), a storage area network(s), a network attached storage(s), deep learning computing servers, deep learning storage servers, and/or the like. The server 170 can include a memory 172, a communication interface 173 and/or a processor 174 that are structurally and/or functionally similar to the memory 102, the communication interface 103 and/or the processor 104 as shown and described with respect to the recognition device 101. In some implementations, however, the memory 172 can include application specific storage (e.g., deep learning storage servers) that is structurally and/or functionally different from the memory 102. Similarly, in some implementations, the processor 174 can include application-specific processors (e.g., GPU rack servers) that are structurally and/or functionally different from the memory 102.”)
While Sampson teaches receiving images of a produce item placed on a scale of the terminal (see rejection of claim 1), Sampson does not expressly disclose the selection of a PLU code at a terminal.
Tsirulnik discloses collecting an image of a retail item when an operator selects a PLU code for the item (Tsirulnik, Paragraph 76, “In one embodiment, a method includes identifying a retail item placed on the scale in order to compare the identified retail item to a retail item selected during the standard retail item look-up process such as performed on a user interface terminal. This method provides improved loss prevention over the improper selection of a retail item during this lookup process. In operation, when a shopper places a retail item on the scanner/scale and the shopper chooses to look-up the retail item such as via a user interface terminal or to enter the price look-up (PLU) code, the method includes acquiring images of the retail item placed on the scale from two cameras to recognize the retail items. Further, the method includes determining those predicted retail items that are above a predefined confidence level and then comparing those predicted retail items with the retail item that was selected or entered through the look-up process. If the retail item selected is on the list of these predicted retail items, then the process continues as normal. If not, then an intervention is triggered. This method performs, among other things, the use case when a shopper places a retail item on the scanner/scale but selects a retail code for a less expensive retail item or a different retail item.”)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to perform the image collection of Sampson when an operator selects a PLU code, as disclosed by Tsirulnik.
The motivation for doing so would have been to indicate to the terminal that it should collect images/videos at that time. This is more computationally efficient than constantly imaging the terminal until the produce item appears. Additionally, the PLU code is needed for determination of price, which is the function of the register. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Sampson with the above teaching of Tsirulnik to fully disclose, “comprising: receiving, from at least one image of at least one produce item placed on a scale of the terminal during a transaction in which a price lookup (PLU) code for the produce item was entered or selected at the terminal;”
Sampson in view of Tsirulnik further discloses “providing the at least one image to a machine learning model (MLM) as input; receiving at least one produce-related characteristic for the at least one produce item as output from the MLM;” (see the rejection of claim 1 for analogous limitations).
“based on the at least one produce-related characteristic, instructing the terminal to one or more of: determine a price for the at least one produce item by subtracting a bag tare weight from a produce weight provided by the scale for the at least one produce item when the at least one produce-related characteristic indicates a bagged classification for the at least one produce item or determine a price for the at least one produce item by not subtracting a bag tare weight from a produce weight provided by the scale for the at least one produce item when the at least one produce-related characteristic indicates an unbagged classification for the at least one produce item;” (Sampson, Paragraph 37, “In some implementations, the method 200 can also include executing an image recognition model to read an indication of information about the produce such as the weight from the image and generate a representation of information about the produce such as the weight. The image recognition model can be a neural network model that can receive an image with a first portion and a second portion. For example, the image can show a produce at the first portion and a scale displaying weight of the produce at the second portion. The image recognition model can use the second portion to generate the indication of weight from the image. The first compute device and/or the second compute device can then calculate an adjusted weight (e.g., the weight of the produce minus the weight of the bag) based on the representation of weight and the predicted bag type indicator. In some instances, the first compute device and/or the second compute device can calculate a price based on the adjusted weight, the predicted category indicator of the image, and/or the predicted organic type indicator of the image.” Note that subtracting the bag weight when the bag is detected implies not subtracting the bag weight when the bag is not detected. Additionally, note that while the claim recites “one or more of” the above limitations, Sampson discloses both.)
“or preselect an organic produce type for the at least one produce item within an interface presented to an operator at the terminal when the at least one produce-related characteristic indicates a specialized produce bag classification or an organic marker classification.” (As indicated above, the claim recites “one or more of”, therefore this limitation is moot in view of the previous limitations being taught by Sampson.)
Regarding claim 20, Sampson in view of Tsirulnik teaches “The method of claim 19,”
“wherein the terminal is a self-service terminal operated by a customer during the transaction or the transaction terminal is a point-of-sale terminal operated by a cashier during the transaction.” (Sampson, Paragraph 8, “Described herein are recognition devices and methods that are suitable for highly reliable recognition of category, bag type, and/or organic type of images of produce. In particular, recognition devices and methods described herein can be implemented in a self-checkout retail device to quickly and efficiently generate a category indicator, a bag type indicator, and/or an organic type indicator for an image of produce. Thereafter, the self-checkout retail device can use the category indicator, the bag type indicator, and/or the organic type indicator of a produce placed to obtain an accurate estimate of weight and cost of the produce.”)
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Sampson in view of OFFICIAL NOTICE.
Regarding claim 8, Sampson teaches “The method of claim 7,”
While Sampson teaches determining a bag type characteristic and determining an organic type (Sampson, Paragraph 8, “Described herein are recognition devices and methods that are suitable for highly reliable recognition of category, bag type, and/or organic type of images of produce. In particular, recognition devices and methods described herein can be implemented in a self-checkout retail device to quickly and efficiently generate a category indicator, a bag type indicator, and/or an organic type indicator for an image of produce. Thereafter, the self-checkout retail device can use the category indicator, the bag type indicator, and/or the organic type indicator of a produce placed to obtain an accurate estimate of weight and cost of the produce.”), Sampson does not expressly disclose “associating the indication with a specialized bag for organic produce”.
The Examiner takes OFFICIAL NOTICE that implementation of different colored bags for organic and non-organic produce is a well-known commercial standard across a wide variety of grocery stores (for example, clear bags for non-organic, and colored, often green, bags for organic).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to include an organic bag type, per the OFFICIAL NOTICE above, as one of the bag types detected by Sampson.
The motivation for doing so would have been to improve accuracy of organic produce determination by considering an organic bag type. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Sampson with the above teaching from OFFICIAL NOTICE to fully disclose, “wherein receiving the at least one produce-related characteristic further includes associating the indication with a specialized bag for organic produce.”
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Sampson in view of Sampson2 (US20230106190A1).
Regarding claim 9, Sampson teaches “The method of claim 1,”
While Sampson discloses “wherein receiving the at least one produce-related characteristic further includes receiving the at least one produce-related characteristic as an indication from the MLM that the at least one produce item” is organic, (Sampson, Paragraph 8, “Described herein are recognition devices and methods that are suitable for highly reliable recognition of category, bag type, and/or organic type of images of produce. In particular, recognition devices and methods described herein can be implemented in a self-checkout retail device to quickly and efficiently generate a category indicator, a bag type indicator, and/or an organic type indicator for an image of produce. Thereafter, the self-checkout retail device can use the category indicator, the bag type indicator, and/or the organic type indicator of a produce placed to obtain an accurate estimate of weight and cost of the produce.”), Sampson does not expressly disclose the identification of “an organic marker associated with organic produce”.
Sampson2 discloses the identification of “an organic marker associated with organic produce” (Sampson2, Paragraph 14, “The image simulation device 101 includes a memory 102, a communication interface 103, and a processor 104. In some implementations, the image simulation device 101 can receive data including the first set of images of the first type (e.g., images of produce, images of meat, images of pastry, and/or the like) and/or the second set of images of the second type (e.g., images of organic type markings, images of price tags, images of barcodes, images of expiry dates, and/or the like) from a data source(s). In some examples disclosed herein, the second set of images of the second type include images of organic type markings. Such images of organic type markings can include images of organic types written in any natural language (e.g., English, Chinese, Hindi, and/or the like), images of organic labels encoding a pattern (e.g., a bar code) that represents organic types, images of organic label having a design that represents organic types, and/or the like. The data source(s) can be or include, for example, an external hard drive (not shown), the compute device 160, the server 170, and/or the like, operatively coupled to the image simulation device 101. In some instances, the image simulation device 101 can receive a set of videos from the data source(s) and analyze the set of videos frame by frame to generate the first set of images and/or the second set of images.”)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to use the image-based identification of organic markers taught by Sampson2 in the image-based organic type determination of Sampson.
The motivation for doing so would have been to improve detection accuracy of organic produce. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Sampson with the above teaching of Sampson2 to fully disclose, “wherein receiving the at least one produce-related characteristic further includes receiving the at least one produce-related characteristic as an indication from the MLM that the at least one produce item includes an organic maker associated with organic produce.”
Regarding claim 10, Sampson teaches “The method of claim 1,”
“wherein causing further includes one or more of instructing the terminal to calculate a price for the at least one produce item without subtracting a bag tare weight from a corresponding produce weight provided by the scale when the at least one produce-related characteristic is a classification of the at least one produce item as not being contained within a bag;” (Sampson, Paragraph 37, “In some implementations, the method 200 can also include executing an image recognition model to read an indication of information about the produce such as the weight from the image and generate a representation of information about the produce such as the weight. The image recognition model can be a neural network model that can receive an image with a first portion and a second portion. For example, the image can show a produce at the first portion and a scale displaying weight of the produce at the second portion. The image recognition model can use the second portion to generate the indication of weight from the image. The first compute device and/or the second compute device can then calculate an adjusted weight (e.g., the weight of the produce minus the weight of the bag) based on the representation of weight and the predicted bag type indicator. In some instances, the first compute device and/or the second compute device can calculate a price based on the adjusted weight, the predicted category indicator of the image, and/or the predicted organic type indicator of the image.” Note that subtracting the bag weight when the bag is detected implies not subtracting the bag weight when the bag is not detected.)
While Sampson instructs the terminal (self-checkout) to select organic produce type when the imaged produce item is identified as organic, (Sampson, Paragraph 8, “Described herein are recognition devices and methods that are suitable for highly reliable recognition of category, bag type, and/or organic type of images of produce. In particular, recognition devices and methods described herein can be implemented in a self-checkout retail device to quickly and efficiently generate a category indicator, a bag type indicator, and/or an organic type indicator for an image of produce. Thereafter, the self-checkout retail device can use the category indicator, the bag type indicator, and/or the organic type indicator of a produce placed to obtain an accurate estimate of weight and cost of the produce.”), Sampson does not expressly disclose the identification of “an organic marker for the at least one produce item.”
Sampson2 discloses identification of “an organic marker for the at least one produce item.” (Sampson2, Paragraph 14, “The image simulation device 101 includes a memory 102, a communication interface 103, and a processor 104. In some implementations, the image simulation device 101 can receive data including the first set of images of the first type (e.g., images of produce, images of meat, images of pastry, and/or the like) and/or the second set of images of the second type (e.g., images of organic type markings, images of price tags, images of barcodes, images of expiry dates, and/or the like) from a data source(s). In some examples disclosed herein, the second set of images of the second type include images of organic type markings. Such images of organic type markings can include images of organic types written in any natural language (e.g., English, Chinese, Hindi, and/or the like), images of organic labels encoding a pattern (e.g., a bar code) that represents organic types, images of organic label having a design that represents organic types, and/or the like. The data source(s) can be or include, for example, an external hard drive (not shown), the compute device 160, the server 170, and/or the like, operatively coupled to the image simulation device 101. In some instances, the image simulation device 101 can receive a set of videos from the data source(s) and analyze the set of videos frame by frame to generate the first set of images and/or the second set of images.”)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to use the image-based identification of organic markers taught by Sampson2 in the image-based organic type determination of Sampson.
The motivation for doing so would have been to improve detection accuracy of organic produce. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Sampson with the above teaching of Sampson2 to fully disclose, “and instructing the terminal to pre-select an organic produce type for the at least one produce item when the at least one produce-related characteristic is a classification of the at least one produce item as being an organic produce item based on detection of an organic marker for the at least one produce item.”
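A non-limiting sketch of the combined teaching as mapped above follows: the MLM's classification either controls the tare subtraction (see the sketch following the claim 1 rejection) or pre-selects an organic produce type at the terminal interface. The terminal API shown is a hypothetical assumption, not Sampson's or Sampson2's implementation.

    class StubTerminal:
        """Hypothetical terminal interface; not an API from the cited art."""
        def preselect_produce_type(self, produce_type: str) -> None:
            print(f"Pre-selected: {produce_type}")

    def modify_workflow(characteristic: str, terminal: StubTerminal) -> None:
        # Organic marker detected (per the Sampson2 mapping): pre-select
        # the organic produce type in the operator-facing interface.
        if characteristic == "organic_marker":
            terminal.preselect_produce_type("organic")

    modify_workflow("organic_marker", StubTerminal())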
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Glaser (US 20220270063 A1) teaches produce and bulk good monitoring using computer vision, including price identification based on item identity and weight.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON JOSEPH SORRIN whose telephone number is (703) 756-1565. The examiner can normally be reached Monday - Friday, 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON JOSEPH SORRIN/Examiner, Art Unit 2672
/SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672