DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is in response to application No. 18/699637, filed on 4/9/2024. Claims 1-19 are currently pending and have been examined. Claims 5-9, 15, 16, 18, and 19 are amended. Claims 1-19 have been rejected as follows.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that use the word “means” or “step” but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitation(s) is/are: Claim 1: A method for logically linking an electronic display unit (5a-5n) to a product (8a-8e) by means of a linking device (12), […] generating image data, which represent an optical depiction of the product (8a-8e) and the display unit (5a-5n), by means of the image detecting device (9) and transmitting these image data to the linking device (12), by means of which the logical linking can be established, generating a trigger signal by means of a trigger stage which is assigned to the mounting structure (2), on which the display unit (5a-5n) is mounted […]
Claim 3: by means of a mechanically actuatable contact,
Claim 10: and the trigger for data processing initiates the start of generating the image data by means of the image detecting device
Claim 12: and the trigger for data processing initiates embedding meta data in the image data by means of the image detecting device
Claim 18: wherein a coded light signal is sent out by means of the light emitting device (7), by means of which identification of the sending device is made possible, in particular a display unit
Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof.
If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that performs the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Step 1: Claims 1-19 are directed to a method. Thus, each claim, on its face, is directed to one of the statutory categories of 35 U.S.C. § 101. However, claims 1-4 and 9-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 2A Prong 1: Independent claim 1 recites:
A method for logically linking an electronic display unit (5a-5n) to a product (8a-8e) by means of a linking device (12), wherein the display unit (5a-5n) and the product (8a-8e) are located on a mounting structure (2) in the detection region of an image detecting device (9), wherein the method comprises the method steps below, namely:
generating image data, which represent an optical depiction of the product (8a-8e) and the display unit (5a-5n), by means of the image detecting device (9) and transmitting these image data to the linking device (12), by means of which the logical linking can be established,
generating a trigger signal by means of a trigger stage which is assigned to the mounting structure (2), on which the display unit (5a-5n) is mounted, wherein the trigger signal indicates the presence of an unlinked display unit (5a-5n) in the detection region and
logically linking the as yet unlinked electronic display unit (5a-5n) to the product (8a-8e) in accordance with the occurrence of the trigger signal.
These limitations, except for the italicized portions, under their broadest reasonable interpretations, recite certain methods of organizing human activity for managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) as well as commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations). The claimed invention recites steps for linking the identification of an electronic display unit (an ESL, or electronic shelf label) to an identified product in a retail store (page 2, lines 1-15, at least, of the instant specification). The steps, under their broadest reasonable interpretation, specifically fall under sales activities. The Examiner notes that although the claim limitations are summarized, the analysis regarding subject matter eligibility considers the entirety of the claim and all of the claim elements individually, as a whole, and in their ordered combination.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of
A method for logically linking an electronic display unit (5a-5n) to a product (8a-8e) by means of a linking device (12), wherein the display unit (5a-5n) and the product (8a-8e) are located on a mounting structure (2) in the detection region of an image detecting device (9), wherein the method comprises the method steps below, namely:
generating image data, which represent an optical depiction of the product (8a-8e) and the display unit (5a-5n), by means of the image detecting device (9) and transmitting these image data to the linking device (12), by means of which the logical linking can be established,
generating a trigger signal by means of a trigger stage which is assigned to the mounting structure (2), on which the display unit (5a-5n) is mounted, wherein the trigger signal indicates the presence of an unlinked display unit (5a-5n) in the detection region and
logically linking the as yet unlinked electronic display unit (5a-5n) to the product (8a-8e) in accordance with the occurrence of the trigger signal.
The additional elements emphasized above are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of processing data) such that they amount to no more than mere instructions to apply the exception using a generic computer component. The limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application – MPEP 2106.05(f).
Accordingly, these additional elements, when considered individually or as a whole, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The independent claim is directed to an abstract idea.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed with respect to Step 2A Prong 2, the additional elements in the claims amount to no more than mere instructions to apply the judicial exception using a generic computer component.
Even when considered as an ordered combination, the additional elements of claim 1 do not add anything that is not already present when they are considered individually. Therefore, under Step 2B, there are no meaningful limitations in claim 1 that transform the judicial exception into a patent eligible application such that the claims amount to significantly more than the judicial exception itself (see MPEP 2106.05).
As such, independent claim 1 is ineligible.
Dependent claims 2-4 and 9-15, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. § 101 because the additional recited limitations fail to establish that the claims are not directed to the same abstract idea as independent claim 1 without significantly more.
Claim 2 recites wherein the trigger stage is arranged to detect the presence of an unlinked electronic display unit (5a-5n) on the mounting structure (2) in an automatic manner and/or as a consequence of a user interaction. The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application. The addition of the electronic display unit on a mounting structure and the optional automatic arrangement are recited at a high level of generality.
Claim 3 recites wherein for the purpose of detecting in an automatic manner when the electronic display unit (5a-5n) is attached on the mounting structure (2), an electronic circuit is influenced in accordance with one of the measures listed below, namely: by means of a mechanically actuatable contact, in a capacitive manner, in an inductive manner, by loading an electrical power supply, by the occurrence of an electronic power supply, by signalling on an electronic bus system. The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application. The addition of the electronic display unit on a mounting structure, the electronic circuits, and the optional automatic arrangement are recited at a high level of generality.
Claim 4 recites wherein for the purpose of detection as a consequence of a user interaction a change of state of an electronic user interface is determined. The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application. The addition of the electronic user interface is recited at a high level of generality and as merely receiving input from a user.
Claim 9 recites wherein a trigger signal receiving stage (10) is provided, with which the trigger signal is received and receiving of the trigger signal is used as the trigger for data processing. The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application. The claim recites the additional element of the trigger signal receiving stage; however, this element is recited at a high level of generality.
Claim 10 recites wherein the trigger signal receiving stage (10) controls the image detecting device (9) and the trigger for data processing initiates the start of generating the image data by means of the image detecting device (9). The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application. The claim recites the additional elements of the trigger signal receiving stage and the image detecting device; however, these elements are recited at a high level of generality.
Claim 11 recites wherein the start of transmitting the image data and their reception at the linking device (12) starts the logical linking process at the linking device (12). The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application.
Claim 12 recites wherein the trigger signal receiving stage (10) controls the image detecting device (9) and the trigger for data processing initiates embedding meta data in the image data by means of the image detecting device (9). The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application. The claim recites the additional elements of the trigger signal receiving stage and the image detecting device; however, these elements are recited at a high level of generality.
Claim 13 recites wherein the occurrence of the meta data in the image data received at the linking device (12) starts the logical linking process at the linking device (12). The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application.
Claim 14 recites wherein the trigger signal receiving stage (10) controls the linking device (12) directly and the trigger for data processing initiates a start of the logical linking process at the linking device (12). The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application. The claim recites the additional elements of the trigger signal receiving stage and the linking device; however, these elements are recited at a high level of generality.
Claim 15 recites wherein the linking device (12) looks for the unlinked electronic display unit (5a-5n) in the generated image data and identifies it and - looks in the generated image data for a product (8a-8e) positioned correspondingly to the position of the found and identified unlinked electronic display unit (5a-5n) which is as yet not logically linked to the electronic display unit (5a-5n) and identifies it, and - for the purpose of logically linking the identified electronic display unit (5a-5n) to the identified product (8a-8e) generates logical linking data and stores them digitally. The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application.
Claims 5-8 and 16-19 are found to be eligible and are not rejected under 35 U.S.C. § 101, as they do not recite an abstract idea or further limit the abstract idea of claim 1. Claims 5-8 are directed to the arrangement of the trigger stage, the electronic power supply unit, and the shelf. Claims 16-19 are directed to the arrangement of the light emitting device and do not recite an abstract idea, nor do they further limit the abstract idea of claim 1. The recitation of the “signals” in claims 5-8 and 16-19 is determined to be descriptive of the structural arrangement and is not part of the abstract idea. For at least these reasons, claims 5-8 and 16-19 are eligible under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 9-15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over YOSHIHIRO (US 20170293959) in view of BELLOWS (US 20210209550).
Regarding claim 1, Yoshihiro discloses:
A method for logically linking an electronic display unit (5a-5n) to a product (8a-8e) by means of a linking device (12), wherein the display unit (5a-5n) and the product (8a-8e) are located on a mounting structure (2) in the detection region of an image detecting device (9), wherein the method comprises the method steps below, namely: [0007] An information processing apparatus provided by the present invention includes: a product recognition unit that recognizes a product from an image on which the product and an electronic shelf label are imaged; a shelf label recognition unit that extracts a shelf label ID of the electronic shelf label from the image, and recognizes a position of the electronic shelf label; and a determination unit that determines whether or not association of a closest product within the image, and product information and the shelf label ID of the electronic shelf label matches association of product information and a shelf label ID of relation information acquired by associating product information of the product and the shelf label ID based on the recognized product and shelf label ID.
generating image data, which represent an optical depiction of the product (8a-8e) and the display unit (5a-5n), by means of the image detecting device (9) and transmitting these image data to the linking device (12), by means of which the logical linking can be established, [Steps disclosed in 0058-0082; paragraphs of note include [0060] For example, the information processing apparatus 2000 acquires the target image stored within this camera. In a case where an external storage apparatus stores the target image generated by the camera, the information processing apparatus 2000 may acquired the target image from this storage apparatus. The camera may be provided such that the generated target image is stored in a storage apparatus provided within the information processing apparatus 2000. In this case, the information processing apparatus 2000 acquires the target image from a storage unit provided within the information processing apparatus 2000. [0064] The product recognition unit 2020 analyzes a target image 10, and recognizes the product. Here, a technique for recognizing an object such as the product pictured on an image is a known technique, and various known techniques may be used for recognizing the object. Hereinafter, an example of a process performed by the product recognition unit 2020 will be described. [0069] Specifically, the product recognition unit 2020 extracts an area representing the product (product area) from the target image 10. The product recognition unit 2020 acquires the product information the feature value 208 of which indicates the feature value equal or similar to the feature value of the extracted product area. The product recognition unit 2020 recognizes that the product represented by the extracted product area is the product identified by the product ID indicated by the acquired product information. 
The product recognition unit 2020 determines the product ID of the product represented by each product area by performing the aforementioned process for each product area]
generating a trigger signal […], wherein the trigger signal indicates the presence of an unlinked display unit (5a-5n) in the detection region and [0134] The information processing apparatus 2000 of Exemplary Embodiment 2 includes a change unit 2080. The change unit 2080 changes the association indicated by the relation information. Specifically, in a case where it is determined that “the shelf label ID associated with the product information of the recognized product in the relation information does not match the shelf label ID of the electronic shelf label closest to the recognized product”, the change unit 2080 changes the shelf label ID associated with the product information in the relation information to the shelf label ID of the electronic shelf label 3000 closest to the product 40 recognized from the target image; [0138] After step S112 is performed, the process of FIG. 15 is branched at step S302. Specifically, in a case where it is determined as “matching” in step S112 (S302: YES), the process of FIG. 15 is ended. On the other hand, in a case where it is determined as “non-matching” in step S112 (S302: NO), the process of FIG. 15 proceeds to step S304. In step S304, the change unit 2080 changes the shelf label ID indicated by the relation information of the target determined as “non-matching” in step S112 to the shelf label ID of the electronic shelf label 3000 closest to the product 40 of the product information indicated by the relation information (S304). Note: the trigger signal corresponds to the determination of a “does not match” result; when it is determined that a product is located in the vicinity of a label and the information on the label does not correspond to that product, an action is triggered. This is the aim of the “change unit” and the “transmission unit” in paragraphs 0134, 0141, 0142, 0175, and 0176; “unlinked” corresponds to “non-matching”.
logically linking the as yet unlinked electronic display unit (5a-5n) to the product (8a-8e) in accordance with the occurrence of the trigger signal. [0141] The information processing apparatus 2000 of Exemplary Embodiment 4 includes a product information transmission unit 2100. The product information transmission unit 2100 transmits the product information to the electronic shelf label 3000 determined by the shelf label ID indicated by the relation information the association of which is changed. Here, the product information transmitted from the product information transmission unit 2100 to the electronic shelf label 3000 is the product information associated with the electronic shelf label 3000 in the relation information the association of which is changed by the change unit 2080. See also [134, 141, 142 and also 175, 176].
While Yoshihiro teaches the association of product information to electronic shelf labels of a shelf, the reference does not expressly disclose:
generating a trigger signal by means of a trigger stage which is assigned to the mounting structure (2), on which the display unit (5a-5n) is mounted
However Bellows teaches:
generating a trigger signal by means of a trigger stage which is assigned to the mounting structure (2), on which the display unit (5a-5n) is mounted [0028] In some embodiments, the camera 104 in conjunction with the server 112 may utilize artificial intelligence and machine learning tactics to identify suitable images, and conditions best suited for taking images of the shelves and the objects on the shelves. In this way, the smart shelf, i.e. the systems, apparatuses, and methods described herein, may learn and improve over time as it is deployed based on a library of images and the corresponding sensor data. More particularly, the camera 104 may be triggered to capture an image based on a threshold being detected by the sensor 106. In this way, the system 100 may know 1) whether or not the captured image is good based on how useful it is for the object recognition engine, that may be stored on the server 112, and 2) from the sensor 106, where the associated merchandise was located on the shelf 102 when the image was captured. If it is determined that images are more useful when the merchandise is positioned in a specific location, then the system 100 could adjust the thresholds that trigger the camera 104 to capture an image, improving future performance. The adjustments of the triggers and thresholds may be managed by software located at the server 112, camera 104, and/or sensor 106. [0014 and 15, 20 and 28 and 30 and 31].
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels in Yoshihiro to include generating a trigger signal by means of a trigger stage which is assigned to the mounting structure (2), on which the display unit (5a-5n) is mounted, as taught in Bellows, in order to improve future performance of the capturing process and result in more useful images (paragraph 0028).
Regarding claim 2, Yoshihiro in view of Bellows teaches the limitations set forth above. Yoshihiro further discloses:
wherein the trigger stage is arranged to detect the presence of an unlinked electronic display unit (5a-5n) on the mounting structure (2) in an automatic manner and/or as a consequence of a user interaction. [0119] There are various timings when the information processing apparatus 2000 performs the series of processes. For example, the information processing apparatus 2000 periodically performs the processes. In this case, for example, a date and time or a cycle in which the process is performed is set for the information processing apparatus 2000 in advance. For example, the information processing apparatus 2000 may perform the process when the operation of the salesperson is received.
Regarding claim 3, Yoshihiro in view of Bellows teaches the limitations set forth above. While Yoshihiro teaches the association of product information to electronic shelf labels of a shelf, the reference does not expressly disclose:
wherein for the purpose of detecting in an automatic manner when the electronic display unit (5a-5n) is attached on the mounting structure (2), an electronic circuit is influenced in accordance with one of the measures listed below, namely: by means of a mechanically actuatable contact, in a capacitive manner, in an inductive manner, by loading an electrical power supply, by the occurrence of an electronic power supply,
by signalling on an electronic bus system.
However Bellows teaches:
wherein for the purpose of detecting in an automatic manner when the electronic display unit (5a-5n) is attached on the mounting structure (2), an electronic circuit is influenced in accordance with one of the measures listed below, namely: by means of a mechanically actuatable contact, in a capacitive manner, in an inductive manner, by loading an electrical power supply, by the occurrence of an electronic power supply, by signalling on an electronic bus system. [0028] the camera 104 may be triggered to capture an image based on a threshold being detected by the sensor 106. and see [0031]
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels in Yoshihiro to include wherein for the purpose of detecting in an automatic manner when the electronic display unit (5a-5n) is attached on the mounting structure (2), an electronic circuit is influenced in accordance with one of the measures listed below, namely: by means of a mechanically actuatable contact, in a capacitive manner, in an inductive manner, by loading an electrical power supply, by the occurrence of an electronic power supply, by signalling on an electronic bus system, as taught in Bellows, in order to improve future performance of the capturing process and result in more useful images (paragraph 0028).
Regarding claim 4, Yoshihiro in view of Bellows teaches the limitations set forth above. Yoshihiro further discloses:
wherein for the purpose of detection as a consequence of a user interaction a change of state of an electronic user interface is determined. [0119] For example, the information processing apparatus 2000 may perform the process when the operation of the salesperson is received. [0053] The information processing apparatus 2000 is realized by various types of computers, such as a portable terminal, a personal computer (PC), or a server. Here, the information processing apparatus 2000 may be realized by a dedicated computer for implementing the information processing apparatus 2000, or may be realized by a general-purpose computer that operates other applications. [0054] FIG. 3 is a block diagram illustrating a hardware configuration of a computer 5000 that realizes the information processing apparatus 2000. The computer 5000 includes a bus 5020, a processor 5040, a memory 5060, a storage 5080, and an input and output interface 5100. and see [0055] and see [0132]
Regarding claim 9, Yoshihiro in view of Bellows teaches the limitations set forth above. While Yoshihiro teaches the association of product information to electronic shelf labels of a shelf, the reference does not expressly disclose:
wherein a trigger signal receiving stage (10) is provided, with which the trigger signal is received and receiving of the trigger signal is used as the trigger for data processing.
However Bellows teaches:
wherein a trigger signal receiving stage (10) is provided, with which the trigger signal is received and receiving of the trigger signal is used as the trigger for data processing. [0028] In some embodiments, the camera 104 in conjunction with the server 112 may utilize artificial intelligence and machine learning tactics to identify suitable images, and conditions best suited for taking images of the shelves and the objects on the shelves. In this way, the smart shelf, i.e. the systems, apparatuses, and methods described herein, may learn and improve over time as it is deployed based on a library of images and the corresponding sensor data. More particularly, the camera 104 may be triggered to capture an image based on a threshold being detected by the sensor 106. In this way, the system 100 may know 1) whether or not the captured image is good based on how useful it is for the object recognition engine, that may be stored on the server 112, and 2) from the sensor 106, where the associated merchandise was located on the shelf 102 when the image was captured. If it is determined that images are more useful when the merchandise is positioned in a specific location, then the system 100 could adjust the thresholds that trigger the camera 104 to capture an image, improving future performance. The adjustments of the triggers and thresholds may be managed by software located at the server 112, camera 104, and/or sensor 106. and see [0031]
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels in Yoshihiro to include wherein a trigger signal receiving stage (10) is provided, with which the trigger signal is received and receiving of the trigger signal is used as the trigger for data processing, as taught in Bellows, in order to improve future performance of the capturing process and result in more useful images (paragraph 0028).
Regarding claim 10, Yoshihiro in view of Bellows teaches the limitations set forth above. While Yoshihiro teaches the association of product information to electronic shelf labels of a shelf, the reference does not expressly disclose:
wherein the trigger signal receiving stage (10) controls the image detecting device (9) and the trigger for data processing initiates the start of generating the image data by means of the image detecting device (9).
However Bellows teaches:
wherein the trigger signal receiving stage (10) controls the image detecting device (9) and the trigger for data processing initiates the start of generating the image data by means of the image detecting device (9). [0028] In some embodiments, the camera 104 in conjunction with the server 112 may utilize artificial intelligence and machine learning tactics to identify suitable images, and conditions best suited for taking images of the shelves and the objects on the shelves. In this way, the smart shelf, i.e. the systems, apparatuses, and methods described herein, may learn and improve over time as it is deployed based on a library of images and the corresponding sensor data. More particularly, the camera 104 may be triggered to capture an image based on a threshold being detected by the sensor 106. In this way, the system 100 may know 1) whether or not the captured image is good based on how useful it is for the object recognition engine, that may be stored on the server 112, and 2) from the sensor 106, where the associated merchandise was located on the shelf 102 when the image was captured. If it is determined that images are more useful when the merchandise is positioned in a specific location, then the system 100 could adjust the thresholds that trigger the camera 104 to capture an image, improving future performance. The adjustments of the triggers and thresholds may be managed by software located at the server 112, camera 104, and/or sensor 106. and see [0031]
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels in Yoshihiro to include wherein the trigger signal receiving stage (10) controls the image detecting device (9) and the trigger for data processing initiates the start of generating the image data by means of the image detecting device (9), as taught in Bellows, in order to improve future performance of the capturing process and result in more useful images (paragraph 0028).
Regarding claim 11, Yoshihiro in view of Bellows teaches the limitations set forth above. While Yoshihiro teaches the association of product information to electronic shelf labels of a shelf, the reference does not expressly disclose:
wherein the start of transmitting the image data and their reception at the linking device (12) starts the logical linking process at the linking device (12).
However Bellows teaches:
wherein the start of transmitting the image data and their reception at the linking device (12) starts the logical linking process at the linking device (12). [0028] the camera 104 may be triggered to capture an image based on a threshold being detected by the sensor 106. and see [0031] [0046] In some embodiments of the method 400, identifying the at least one product identifier (block 408) further comprises identifying, at the server, in the at least one captured image a product sku, product UPC, or a combination thereof. Additionally, other product identifiers such as those listed herein may be used and may be able to be identified by the server. [0047] The method 400 may further comprise generating or updating, at the at least one server, a planogram for the shelf based on the at least one identified product identifier. Similarly, the method 400 may further comprise generating or updating, at the at least one server, the information displayed on at least one electronic shelf label for the shelf based on the at least one identified product identifier.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels in Yoshihiro to include wherein the start of transmitting the image data and their reception at the linking device (12) starts the logical linking process at the linking device (12), as taught in Bellows, in order to improve future performance of the capturing process and result in more useful images (paragraph 0028).
Regarding claim 12, Yoshihiro in view of Bellows teaches the limitations set forth above. While Yoshihiro teaches the association of product information to electronic shelf labels of a shelf, the reference does not expressly disclose:
wherein the trigger signal receiving stage (10) controls the image detecting device (9) and the trigger for data processing initiates embedding meta data in the image data by means of the image detecting device (9).
However Bellows teaches:
wherein the trigger signal receiving stage (10) controls the image detecting device (9) and the trigger for data processing initiates embedding meta data in the image data by means of the image detecting device (9). [0026] For example, the images could be provided by each product's supplier, by the retailer, or by a third party. The images may be taken manually or there may be an automated setup that captures multiple images for each product and are uploaded to the database. Additionally, the image data stored in the libraries may be collected via point of sale solutions that capture images of the products, along with UPC (barcode) data and/or EPC (RFID) data. For example, an in-lane or bioptic scanner may include a camera that records an image of the product while the barcode is read. To identify the at least one product identifier, the server 112 may be further configured to identify in the at least one captured image a product sku, product UPC, or a combination thereof. Additionally, other product identifiers may be used for products and identifiable by the server 112, such as, for example, brand names for products, the product class, product dimensions, as well as colors, shapes, outlines, features, patterns, and anything else in the product's appearance that visually distinguishes it. In some embodiments, the camera 104 may be capable of performing the identification itself without communication with the server 112, such that the camera 104 may be capable of performing all the same functionality listed above with respect to the server 112 in reference to performing the identification. [0027] The camera 104 may work in conjunction with the server 112 to identify the best types of images, and the best conditions under which to take an image. This identification process may improve over time through the use of artificial intelligence and/or machine learning techniques to train the camera 104 to take better pictures. 
The training of the camera 104 may also include integrating the data from sensor 106 into the library of information that the camera 104, or other program, may be trained on. [0028] In some embodiments, the camera 104 in conjunction with the server 112 may utilize artificial intelligence and machine learning tactics to identify suitable images, and conditions best suited for taking images of the shelves and the objects on the shelves. In this way, the smart shelf, i.e. the systems, apparatuses, and methods described herein, may learn and improve over time as it is deployed based on a library of images and the corresponding sensor data.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels in Yoshihiro to include wherein the trigger signal receiving stage (10) controls the image detecting device (9) and the trigger for data processing initiates embedding meta data in the image data by means of the image detecting device (9), as taught in Bellows, in order to improve future performance of the capturing process and result in more useful images (paragraph 0028).
Regarding claim 13, Yoshihiro in view of Bellows teaches the limitations set forth above. While Yoshihiro teaches the association of product information to electronic shelf labels of a shelf, the reference does not expressly disclose:
wherein the occurrence of the meta data in the image data received at the linking device (12) starts the logical linking process at the linking device (12).
However Bellows teaches:
wherein the occurrence of the meta data in the image data received at the linking device (12) starts the logical linking process at the linking device (12). [0026] For example, the images could be provided by each product's supplier, by the retailer, or by a third party. The images may be taken manually or there may be an automated setup that captures multiple images for each product and are uploaded to the database. Additionally, the image data stored in the libraries may be collected via point of sale solutions that capture images of the products, along with UPC (barcode) data and/or EPC (RFID) data. For example, an in-lane or bioptic scanner may include a camera that records an image of the product while the barcode is read. To identify the at least one product identifier, the server 112 may be further configured to identify in the at least one captured image a product sku, product UPC, or a combination thereof. Additionally, other product identifiers may be used for products and identifiable by the server 112, such as, for example, brand names for products, the product class, product dimensions, as well as colors, shapes, outlines, features, patterns, and anything else in the product's appearance that visually distinguishes it. In some embodiments, the camera 104 may be capable of performing the identification itself without communication with the server 112, such that the camera 104 may be capable of performing all the same functionality listed above with respect to the server 112 in reference to performing the identification. [0027] The camera 104 may work in conjunction with the server 112 to identify the best types of images, and the best conditions under which to take an image. This identification process may improve over time through the use of artificial intelligence and/or machine learning techniques to train the camera 104 to take better pictures. 
The training of the camera 104 may also include integrating the data from sensor 106 into the library of information that the camera 104, or other program, may be trained on. [0028] In some embodiments, the camera 104 in conjunction with the server 112 may utilize artificial intelligence and machine learning tactics to identify suitable images, and conditions best suited for taking images of the shelves and the objects on the shelves. In this way, the smart shelf, i.e. the systems, apparatuses, and methods described herein, may learn and improve over time as it is deployed based on a library of images and the corresponding sensor data.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels in Yoshihiro to include wherein the occurrence of the meta data in the image data received at the linking device (12) starts the logical linking process at the linking device (12), as taught in Bellows, in order to improve future performance of the capturing process and result in more useful images (paragraph 0028).
Regarding claim 14, Yoshihiro in view of Bellows teaches the limitations set forth above. While Yoshihiro teaches the association of product information to electronic shelf labels of a shelf, the reference does not expressly disclose:
wherein the trigger signal receiving stage (10) controls the linking device (12) directly and the trigger for data processing initiates a start of the logical linking process at the linking device (12).
However Bellows teaches:
wherein the trigger signal receiving stage (10) controls the linking device (12) directly and the trigger for data processing initiates a start of the logical linking process at the linking device (12). [0028] the camera 104 may be triggered to capture an image based on a threshold being detected by the sensor 106. and see [0031] [0046] In some embodiments of the method 400, identifying the at least one product identifier (block 408) further comprises identifying, at the server, in the at least one captured image a product sku, product UPC, or a combination thereof. Additionally, other product identifiers such as those listed herein may be used and may be able to be identified by the server. [0047] The method 400 may further comprise generating or updating, at the at least one server, a planogram for the shelf based on the at least one identified product identifier. Similarly, the method 400 may further comprise generating or updating, at the at least one server, the information displayed on at least one electronic shelf label for the shelf based on the at least one identified product identifier.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels in Yoshihiro to include wherein the trigger signal receiving stage (10) controls the linking device (12) directly and the trigger for data processing initiates a start of the logical linking process at the linking device (12), as taught in Bellows, in order to improve future performance of the capturing process and result in more useful images (paragraph 0028).
Regarding claim 15, Yoshihiro in view of Bellows teaches the limitations set forth above. Yoshihiro further discloses:
wherein the linking device (12) looks for the unlinked electronic display unit (5a-5n) in the generated image data and identifies it and - looks in the generated image data for a product (8a-8e) positioned correspondingly to the position of the found and identified unlinked electronic display unit (5a-5n) which is as yet not logically linked to the electronic display unit (5a-5n) and identifies it, [0134] The information processing apparatus 2000 of Exemplary Embodiment 2 includes a change unit 2080. The change unit 2080 changes the association indicated by the relation information. Specifically, in a case where it is determined that “the shelf label ID associated with the product information of the recognized product in the relation information does not match the shelf label ID of the electronic shelf label closest to the recognized product”, the change unit 2080 changes the shelf label ID associated with the product information in the relation information to the shelf label ID of the electronic shelf label 3000 closest to the product 40 recognized from the target image; [0138] After step S112 is performed, the process of FIG. 15 is branched at step S302. Specifically, in a case where it is determined as “matching” in step S112 (S302: YES), the process of FIG. 15 is ended. On the other hand, in a case where it is determined as “non-matching” in step S112 (S302: NO), the process of FIG. 15 proceeds to step S304. In step S304, the change unit 2080 changes the shelf label ID indicated by the relation information of the target determined as “non-matching” in step S112 to the shelf label ID of the electronic shelf label 3000 closest to the product 40 of the product information indicated by the relation information (S304).
Note: The triggering signal corresponds to the determination of a "does not match" result; when it is determined that a product is located in the vicinity of a label and the information on the label does not correspond to that which fits the product, then an action is triggered. This is the aim of the "change unit" and the "transmission unit" in paragraphs 0134, 0141, 0142, 0175, and 0176; "unlinked" is referred to as "not matched."
and - for the purpose of logically linking the identified electronic display unit (5a-5n) to the identified product (8a-8e) generates logical linking data and stores them digitally. [0093] FIG. 8 is a diagram illustrating the relation information in a table format. The table of FIG. 8 is described as a relation information table 300. The relation information table 300 includes four columns of a shelf label ID 302, a product ID 304, a product name 306, and a price 308. The shelf label ID 302 is the shelf label ID of the electronic shelf label 3000. The product ID 304, the product name 306, and the price 308 are equivalent to the product ID 202, the product name 204, and the price 206 of the product information table 200.[0094] For example, the relation information is stored in a relation information storage unit provided inside or outside the information processing apparatus 2000.
Regarding claim 19, Yoshihiro in view of Bellows teaches the limitations set forth above. While Yoshihiro teaches the association of product information to electronic shelf labels of a shelf, the reference does not expressly disclose:
wherein the image detecting device (9) is designed for focussing or zooming, in particular in a digital manner.
However Bellows teaches:
wherein the image detecting device (9) is designed for focussing or zooming, in particular in a digital manner. Focusing and zooming are common properties of cameras known from every day use, for example also the camera in [0017] For example, in one embodiment, rather than taking a photo at an arbitrary time and then deciding if the image is useful (i.e. is there an object present, is it in focus, and is it at an appropriate distance?), the output of the smart shelf sensors can be used to determine if an object is appropriately positioned for a photo before exercising the camera. The camera would be focused only on the front inventory position near the shelf's edge, thereby simplifying the camera requirements and the object recognition processing as well as setting up clear bounds for a useful image. When the front position is empty, the system can continue to refer to the most recent image taken, and then once the position is occupied, the camera can be allowed to take another image. Since a successful implementation will require limiting power and communications, it is important that neither energy nor time are wasted on capturing and analyzing images that ultimately serve no purpose. This embodiment therefore serves as a valuable filter.
Note: The language "the image detecting device (9) is designed for focussing or zooming, in particular in a digital manner" is interpreted as intended use or intended function and not positively recited in the claims; it is thereby given little patentable weight.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels in Yoshihiro to include wherein the image detecting device (9) is designed for focussing or zooming, in particular in a digital manner, as taught in Bellows, in order to improve future performance of the capturing process and result in more useful images (paragraph 0028).
Claims 5-8 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over YOSHIHIRO (US 20170293959) in view of BELLOWS (US 20210209550) in further view of DE BRUJN (US 20180293593).
Regarding claim 5, Yoshihiro in view of Bellows teaches the limitations set forth above. While the combination teaches the association of product information to electronic shelf labels of a shelf through the use of triggered cameras, the references do not expressly disclose:
wherein the electronic display unit (5a-5n) comprises the trigger stage and generating the trigger signal is performed by the electronic display unit (5a-5n) itself.
However De Brujn teaches:
wherein the electronic display unit (5a-5n) comprises the trigger stage and generating the trigger signal is performed by the electronic display unit (5a-5n) itself. [0085] In various embodiments, it is assumed that each ESL is capable of displaying its own unique identifier, the ESL ID, e.g. after receiving a wirelessly broadcasted command to do so. The ESL ID could for instance be displayed as alphanumerical text or as a standardized identity pattern.[0086] Alternatively if the ESL is provided with an illumination device; such as a LED then this LED may be used to emit the ESL ID using a simple low-cost red LED by modulating its light output. Although according to the strict definition above such modulated emission would not qualify as coded light per se; as it does not have an illumination function a similar modulation may be used as that for the illumination light. As a result it would be possible to re-use the software used for detecting the coded light from the illumination lighting to also capture the ESL-ID as modulated in the LED output. And see [0088]
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels using camera functions in Yoshihiro in view of Bellows to include wherein the electronic display unit (5a-5n) comprises the trigger stage and generating the trigger signal is performed by the electronic display unit (5a-5n) itself, as taught in De Brujn, in order to improve the accuracy of the information provided at the shelf (paragraph 008).
Regarding claim 6, Yoshihiro in view of Bellows teaches the limitations set forth above. While the combination teaches the association of product information to electronic shelf labels of a shelf through the use of triggered cameras, the references do not expressly disclose:
wherein an electronic power supply unit (13a - 13d), which is formed on a shelf rail for electronically and/or communication-technically supplying at least one electronic display unit (5a-5n), comprises the trigger stage and wherein generating the trigger signal is performed by the electronic supply unit (13a - 13d).
However De Brujn teaches:
wherein an electronic power supply unit (13a - 13d), which is formed on a shelf rail for electronically and/or communication-technically supplying at least one electronic display unit (5a-5n), comprises the trigger stage and wherein generating the trigger signal is performed by the electronic supply unit (13a - 13d). [0085] In various embodiments, it is assumed that each ESL is capable of displaying its own unique identifier, the ESL ID, e.g. after receiving a wirelessly broadcasted command to do so. The ESL ID could for instance be displayed as alphanumerical text or as a standardized identity pattern.[0086] Alternatively if the ESL is provided with an illumination device; such as a LED then this LED may be used to emit the ESL ID using a simple low-cost red LED by modulating its light output. Although according to the strict definition above such modulated emission would not qualify as coded light per se; as it does not have an illumination function a similar modulation may be used as that for the illumination light. As a result it would be possible to re-use the software used for detecting the coded light from the illumination lighting to also capture the ESL-ID as modulated in the LED output. And see [0088]
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels using camera functions in Yoshihiro in view of Bellows to include wherein an electronic power supply unit (13a - 13d), which is formed on a shelf rail for electronically and/or communication-technically supplying at least one electronic display unit (5a-5n), comprises the trigger stage and wherein generating the trigger signal is performed by the electronic supply unit (13a - 13d), as taught in De Brujn, in order to improve the accuracy of the information provided at the shelf (paragraph 008).
Regarding claim 7, Yoshihiro in view of Bellows teaches the limitations set forth above. While the combination teaches the association of product information to electronic shelf labels of a shelf through the use of triggered cameras, the references do not expressly disclose:
wherein the trigger stage comprises a radio signal transmitting stage (16) and a radio signal, in particular a Bluetooth-Low-Energy radio signal, is sent as the trigger signal by the radio signal transmitting stage.
However De Brujn teaches:
wherein the trigger stage comprises a radio signal transmitting stage (16) and a radio signal, in particular a Bluetooth-Low-Energy radio signal, is sent as the trigger signal by the radio signal transmitting stage. [0073] A quickly emerging radio technology is Bluetooth low energy (BLE).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels using camera functions in Yoshihiro in view of Bellows to include wherein the trigger stage comprises a radio signal transmitting stage (16) and a radio signal, in particular a Bluetooth-Low-Energy radio signal, is sent as the trigger signal by the radio signal transmitting stage, as taught in De Brujn, in order to improve the accuracy of the information provided at the shelf (paragraph 008).
Regarding claim 8, Yoshihiro in view of Bellows teaches the limitations set forth above. While the combination teaches the association of product information to electronic shelf labels of a shelf through the use of triggered cameras, the references do not expressly disclose:
wherein the trigger stage comprises a light signal emitting stage (7) and a light signal, in particular a LED light signal, is sent as the trigger signal by the light signal emitting stage (7).
However De Brujn teaches:
wherein the trigger stage comprises a light signal emitting stage (7) and a light signal, in particular a LED light signal, is sent as the trigger signal by the light signal emitting stage (7). [0085] In various embodiments, it is assumed that each ESL is capable of displaying its own unique identifier, the ESL ID, e.g. after receiving a wirelessly broadcasted command to do so. The ESL ID could for instance be displayed as alphanumerical text or as a standardized identity pattern.[0086] Alternatively if the ESL is provided with an illumination device; such as a LED then this LED may be used to emit the ESL ID using a simple low-cost red LED by modulating its light output. Although according to the strict definition above such modulated emission would not qualify as coded light per se; as it does not have an illumination function a similar modulation may be used as that for the illumination light. As a result it would be possible to re-use the software used for detecting the coded light from the illumination lighting to also capture the ESL-ID as modulated in the LED output. And see [0088]
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels using camera functions in Yoshihiro in view of Bellows to include wherein the trigger stage comprises a light signal emitting stage (7) and a light signal, in particular a LED light signal, is sent as the trigger signal by the light signal emitting stage (7), as taught in De Brujn, in order to improve the accuracy of the information provided at the shelf (paragraph 008).
Regarding claim 16, Yoshihiro in view of Bellows teaches the limitations set forth above. While the combination teaches the association of product information to electronic shelf labels of a shelf through the use of triggered cameras, the references do not expressly disclose:
wherein each display unit (5a-5n) comprises its own light emitting device (7), which is activated at the occurrence of the trigger signal for emitting a light signal.
However De Brujn teaches:
wherein each display unit (5a-5n) comprises its own light emitting device (7), which is activated at the occurrence of the trigger signal for emitting a light signal. [0086] Alternatively if the ESL is provided with an illumination device; such as a LED then this LED may be used to emit the ESL ID using a simple low-cost red LED by modulating its light output. Although according to the strict definition above such modulated emission would not qualify as coded light per se; as it does not have an illumination function a similar modulation may be used as that for the illumination light. As a result it would be possible to re-use the software used for detecting the coded light from the illumination lighting to also capture the ESL-ID as modulated in the LED output. And see [0088]
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels using camera functions in Yoshihiro in view of Bellows to include wherein each display unit (5a-5n) comprises its own light emitting device (7), which is activated at the occurrence of the trigger signal for emitting a light signal, as taught in De Brujn, in order to improve the accuracy of the information provided at the shelf (paragraph 008).
Regarding claim 17, Yoshihiro in view of Bellows teaches the limitations set forth above. While the combination teaches the association of product information to electronic shelf labels of a shelf through the use of triggered cameras, the references do not expressly disclose:
wherein only one light emitting device (7) is provided for each power supply device, the light emitting device being activated at the occurrence of the trigger signal for emitting a light signal.
However, De Brujn teaches:
wherein only one light emitting device (7) is provided for each power supply device, the light emitting device being activated at the occurrence of the trigger signal for emitting a light signal. [0085] In various embodiments, it is assumed that each ESL is capable of displaying its own unique identifier, the ESL ID, e.g. after receiving a wirelessly broadcasted command to do so. The ESL ID could for instance be displayed as alphanumerical text or as a standardized identity pattern. [0086] Alternatively, if the ESL is provided with an illumination device, such as a LED, then this LED may be used to emit the ESL ID using a simple low-cost red LED by modulating its light output. Although, according to the strict definition above, such modulated emission would not qualify as coded light per se, as it does not have an illumination function, a similar modulation may be used as that for the illumination light. As a result, it would be possible to re-use the software used for detecting the coded light from the illumination lighting to also capture the ESL-ID as modulated in the LED output. And see [0088].
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels using camera functions in Yoshihiro in view of Bellows to include wherein only one light emitting device (7) is provided for each power supply device, the light emitting device being activated at the occurrence of the trigger signal for emitting a light signal, as taught in De Brujn, in order to improve the accuracy of the information provided at the shelf (paragraph 008).
Regarding claim 18, Yoshihiro in view of Bellows in further view of De Brujn teaches the limitations set forth above. While the combination of Yoshihiro in view of Bellows teaches the association of product information to electronic shelf labels of a shelf through the use of triggered cameras, Yoshihiro in view of Bellows does not expressly disclose:
wherein a coded light signal is sent out by means of the light emitting device (7), by means of which identification of the sending device is made possible, in particular for a display unit (5a-5n) the light signal is coded in such a way that the code contained in it corresponds to that code, which for an unlinked display unit (5a-5n) is displayed on its screen.
However, De Brujn teaches:
wherein a coded light signal is sent out by means of the light emitting device (7), by means of which identification of the sending device is made possible, in particular for a display unit (5a-5n) the light signal is coded in such a way that the code contained in it corresponds to that code, which for an unlinked display unit (5a-5n) is displayed on its screen. [0085] In various embodiments, it is assumed that each ESL is capable of displaying its own unique identifier, the ESL ID, e.g. after receiving a wirelessly broadcasted command to do so. The ESL ID could for instance be displayed as alphanumerical text or as a standardized identity pattern. [0086] Alternatively, if the ESL is provided with an illumination device, such as a LED, then this LED may be used to emit the ESL ID using a simple low-cost red LED by modulating its light output. Although, according to the strict definition above, such modulated emission would not qualify as coded light per se, as it does not have an illumination function, a similar modulation may be used as that for the illumination light. As a result, it would be possible to re-use the software used for detecting the coded light from the illumination lighting to also capture the ESL-ID as modulated in the LED output. And see [0088].
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the association of the product labels to shelf labels using camera functions in Yoshihiro in view of Bellows to include wherein a coded light signal is sent out by means of the light emitting device (7), by means of which identification of the sending device is made possible, in particular for a display unit (5a-5n) the light signal is coded in such a way that the code contained in it corresponds to that code, which for an unlinked display unit (5a-5n) is displayed on its screen, as taught in De Brujn, in order to improve the accuracy of the information provided at the shelf (paragraph 008).
Relevant Art Not Cited
US 12118506 discloses an automated inventory monitoring system that includes an image capture module able to create an image of an aisle of a retail store. Images of product labels are identified in the image and classified as shelf labels or peg labels. For shelf labels, an area of the shelf is defined and associated with the shelf label. Images of products are identified in the image, and products on the shelf within an area associated with a shelf label are associated with the shelf label. Products located below a peg label are associated with the peg label. Based on the association between labels and products, out-of-stock products, plugs, and spread may be detected and reported to the staff of the retail store.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTORIA E. FRUNZI whose telephone number is (571) 270-1031. The examiner can normally be reached Monday-Friday, 7-4 (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Marissa Thein, can be reached at (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
VICTORIA E. FRUNZI
Primary Examiner
Art Unit 3689
/VICTORIA E. FRUNZI/Primary Examiner, Art Unit 3689
2/4/2026