DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This is in reply to the communication filed on 10/02/2025.
Claims 1, 5 and 9 have been amended.
Claims 1-12 are currently pending and have been examined.
Response to Arguments
In response to Applicant's Arguments/Remarks made in the amendment filed on 10/02/2025:
1) Applicant's arguments submitted on pages 11-12 regard the rejection of the claims under 35 U.S.C. § 101.
Applicant's arguments have been fully considered but they are not persuasive.
Prong 1: Applicant states that “the claims are patent-eligible” because they are “directed to a specific improvement in computer functionality, not an abstract idea”. However, the claims expressly recite managing personal behavior or relationships or interactions between people (inventory management: identifying items in a retail environment) and a mental process (analyzing collected item data, i.e., images/video, and storing data), which are considered abstract concepts under the MPEP categories; see MPEP 2106.04. The claims also recite generic information-processing steps (receive, identify, store), which the Federal Circuit has repeatedly found abstract when claimed at a results-oriented level (Electric Power Group; Content Extraction).
Prong 2: Applicant asserts integration into a practical application based on “identify merchandise from low-resolution video.” Merely applying an abstract workflow on a computer, even in real time, does not integrate the exception absent a specific improvement to technology or computer functioning. The ordered sequence is a conventional pipeline (capture-identify-store) and does not constitute an improvement to the underlying technology (contrast McRO, Enfish, Thales, Visual Memory).
Step 2B: Applicant argues the elements are “significantly more” because they recite “a specific implementation that improves upon prior art computer-based systems”. Without any claimed technical mechanism or unconventional arrangement, this is a results-oriented limitation implemented on conventional devices. Courts have rejected similar arguments where the claimed improvement is merely the use of standard components to process and present information (Alice; Electric Power Group; Yu v. Apple, 1 F.4th 1040 (Fed. Cir. 2021) (claims to video/image item recognition using conventional components and results-oriented processing held ineligible)). Applicant has not pointed to claim language reciting non-conventional elements or provided evidence of non-conventionality as required under Berkheimer.
In sum, Applicant’s arguments do not persuasively show that the claims avoid reciting a judicial exception, integrate the exception into a practical application, or add significantly more.
2) Applicant's arguments submitted on pages 12-13 regard the rejection of the claims under 35 U.S.C. § 103.
In response, the examiner respectfully notes first that the newly amended limitations are narrower in scope than the features previously presented in claims 1, 5 and 9. Applicant's arguments with respect to the amended limitations have been considered; however, because those arguments are raised primarily in support of the amendments to independent claims 1, 5 and 9, they are believed to be fully addressed by the new ground of rejection under § 103 set forth below, which incorporates a new reference, Glaser et al. (US 2020/0193507 A1, hereinafter “Glaser”), in view of Adato et al. (US 2019/0213523 A1, hereinafter “Adato”), to teach the new limitations of claims 1, 5 and 9.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.
Step 1:
Claims 1-4 recite a non-transitory computer-readable recording medium, which is directed to a manufacture.
Claims 5-8 recite a method, which is directed to a process.
Claims 9-12 recite an apparatus, which is directed to a machine.
Therefore, each claim falls within one of the four statutory categories.
Step 2A, Prong 1 (Is a judicial exception recited?):
Independent claims 1, 5 and 9 recite the abstract idea of analyzing purchasing/customer behavior; see [0004]. This idea is described by the steps of
identifying, from a first trained model that inputs thereto a first video of a resolution insufficient to identify a merchandise item by image matching of a first area including a shelf among shelves where merchandise in a store is placed, an action of a specific user picking up a piece of merchandise from the shelf, the specific user being one of plural persons, wherein identification information of the shelf and person feature information of the specific user are stored in a storage in association with each other every time when the action is performed, the person feature information being obtained;
identifying, from a second video of [[an]] a second area including an accounting machine in the store, each of the specific user and the accounting machine, based on the stored person feature information;
storing information on the specific user and information on the accounting machine in association with each other;
receiving a purchase history transmitted that has been associated with the specific user;
identifying a merchandise item included in the purchase history; and
identifying, by solving for an unknown association between each shelf and a corresponding merchandise item, the merchandise item to be associated with the shelf, such that a difference between a predicted value and an observed value that is the number of pieces on the merchandise item included in the purchase history is minimized, the predicted value being an addition of a number of pieces of merchandise picked up by the specific user from each of the shelves.
These claims recite a certain method of organizing human activity, as the abstract idea limitations above are directed to managing personal behavior or relationships or interactions between people. The examiner finds that the claims simply recite steps of following rules or instructions to analyze purchasing/customer behavior by collecting data and identifying results. The examiner additionally finds the claims to be similar to examples the courts have identified as being a certain method of organizing human activity:
i. filtering content, BASCOM Global Internet v. AT&T Mobility, LLC, 827 F.3d 1341, 1345-46, 119 USPQ2d 1236, 1239 (Fed. Cir. 2016) (finding that filtering content was an abstract idea under step 2A, but reversing an invalidity judgment of ineligibility due to an inadequate step 2B analysis).
ii. considering historical usage information while inputting data, BSG Tech. LLC v. Buyseasons, Inc., 899 F.3d 1281, 1286, 127 USPQ2d 1688, 1691 (Fed. Cir. 2018).
Offending clauses include:
“storing information on the specific user and information on the accounting machine in association with each other”
The claims also recite a mental process. Before computers, a human could mentally, or with pen and paper, identify information (such as a customer action or customer information), store associated information, and receive data to determine results (such as a merchandise location or a specific customer). The examiner finds the recited claims to be similar to a claim to "collecting information, analyzing it, and displaying certain results of the collection and analysis," where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016), which the courts have found to recite a mental process.
Offending clauses include:
“storing information on the specific user and information on the accounting machine in association with each other”
Step 2A, Prong 2 (Is the exception integrated into a practical application?):
This judicial exception is not integrated into a practical application, for the following reasons:
The claimed additional limitations are:
Claim 1: a non-transitory computer-readable recording medium having stored therein an information processing program that causes a computer to execute a process, first and second trained models, a storage, the accounting machine,
Claim 5: first and second models, a storage, the accounting machine, a processor,
Claim 9: a memory; and a processor coupled to the memory, first and second trained models, a storage, the accounting machine,
The additional limitations are directed to using a generic computer to process information and perform the abstract idea. Therefore, the limitations merely amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f).
Claims 1, 5 and 9: The additional recitation of “storing information on the specific user and information on the accounting machine in association with each other” is insignificant extra-solution activity (data gathering/output) that does not meaningfully limit use of the abstract idea (Mayo; OIP Techs.; MPEP 2106.05(g)).
Step 2B (Does the claim recite additional elements that amount to significantly more than the judicial exception?):
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The Step 2B considerations overlap with those of Step 2A, Prong 2, and have already been substantially addressed in the Step 2A, Prong 2 analysis above. As discussed there, the additional limitations amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f).
The claims additionally recite “storing information on the specific user and information on the accounting machine in association with each other”. With respect to this insignificant extra-solution activity, the element is similar to at least the following concepts determined by the courts to be insignificant extra-solution activity that does not amount to significantly more than the abstract idea:
a) Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information). See MPEP 2106.05(d)
b) Obtaining information about transactions using the Internet to verify credit card transactions, CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011);
c) Consulting and updating an activity log, Ultramercial, 772 F.3d at 715, 112 USPQ2d at 1754. See MPEP 2106.05(g).
In addition, the dependent claims recite:
Step 2A, Prong 1 (Is a judicial exception recited?):
The recitations of dependent claims 2-4, 6-8 and 10-12 further narrow the abstract idea recited in independent claims 1, 5 and 9 and are therefore directed to the same abstract idea.
Step 2A, Prong 2 and Step 2B:
The dependent claims 2-4, 6-8 and 10-12 further narrow the abstract idea recited in the independent claims 1, 5 and 9 and are therefore directed towards the same abstract idea.
The dependent claims recite the following additional limitations:
Claims 2-4: The non-transitory computer-readable recording medium,
Claims 10-12: processor,
However, the examiner finds each of these additional elements to be directed to merely “apply[ing] it,” i.e., applying generic technology to perform the recited abstract idea of analyzing purchasing/customer behavior. The recitation of generic computer technology used as a tool to execute the steps that define the abstract idea does not provide integration at the second prong and does not provide significantly more at Step 2B.
Claims 6-8: Nothing additional is claimed for consideration at the second prong or Step 2B.
Therefore, the limitations of claims 1-12, when viewed individually and as an ordered combination, are directed to ineligible subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-5, 7-9 and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Omer et al. (US 2020/0184270 A1, hereinafter “Omer”) in view of Glaser et al. (US 2020/0193507 A1, hereinafter “Glaser”), further in view of Adato et al. (US 2019/0213523 A1, hereinafter “Adato”).
Regarding claim 5, and similarly claims 1 and 9: Omer discloses an information processing method comprising:
identifying, from a first trained model (Omer, [0034]; “one or more of the supervised ML models comprise a neural network”) that inputs thereto a first video to identify a merchandise item by image matching of [[an]] a first area including a shelf among shelves where merchandise in a store is placed (Omer, [0107]; “The training dataset generation system 202 may communicate via the I/O interface 210 with one or more imaging sensors 230, for example, a camera, a video camera and/or the like deployed to monitor the store 200”), an action of a specific user picking up a piece of merchandise from the shelf, the specific user being one of plural persons (Omer, [0149]; “at 402, the process 400 starts with the items identifier 522 receiving a plurality of images captured by one or more of the imaging sensors 230 deployed to monitor the store 200 … one or more of the images may depict one or more items offered for sale in the store 200 which one or more of the customers 204 interacts with, for example, an item picked up by a customer 204”, see [0065-0066]), wherein identification information of the shelf (Omer, [0067]; “identifier(s) read from item(s)”) and person feature information of the specific user (Omer, [0067] and [0037]; “tracked customers”) are stored in a storage in association with each other every time when the action is performed (Omer, [0067]; “The tracked customers may be associated with checkout events based on the identification of the tracked customers at the POS area(s) coupled with a timestamp assigned to the identifier(s) read by the POS reader during the respective checkout events. 
As such the identifier(s) read from item(s) in one or more of the checkout events may be correlated with the items picked up by the respective tracked customers”), the person feature information being obtained by a second trained model (Omer, [0037]; “one or more of the items picked up by one or more of the tracked customers are identified in the store using one or more detection algorithms applied to at least some of the plurality of images to identify one or more of the tracked customers and one or more interactions of the tracked customer(s) with the item(s)”; the so-called “second trained model” is considered no more than a predictable variation or duplication of the trained model taught by Omer. See KSR Int’l Co. v. Teleflex, Inc., 550 U.S. 398, 418-421 (2007); see also MPEP § 2144.04 VI, B.);
identifying, from a second video of [[an]] a second area including an accounting machine (Omer, [0065]; “Point of Sale (POS)”; the so-called “second video” and “second area” are considered no more than a predictable variation or duplication of the elements taught by Omer. See KSR Int’l Co. v. Teleflex, Inc., 550 U.S. 398, 418-421 (2007); see also MPEP § 2144.04 VI, B.) in the store, each of the specific user and the accounting machine (Omer, [0161]; “As shown at 414, during the checkout event of one or more of the tracked customers 204 at the POS area(s) of the store 200, the items identifier 522 may receive from the POS reader(s) 240”, [0067]; “Images depicting the POS area(s) may be analyzed to identify the tracked customers at the POS area(s) where the items picked up by the tracked customers are checked out, i.e., the identifiers of the items are read by one or more of the POS readers”), based on the stored person feature information (Omer, [0105]; “The training dataset generation system 202 may include … storage 214 for storing code (program store) and/or data … imaging sensor(s) 230 may be deployed in the store 200 to detect and track one or more customers 204 which may enter the store 200, exit the store 200, move in the interior space of the store 200, interact with one or more of the items offered for sale in the store 200, check out items they wish to purchase and/or the like”);
storing information on the specific user and information on the accounting machine into a storage in association with each other; (Omer, [0067-0070]; “The tracked customers may be associated with checkout events … As such the identifier(s) read from item(s) in one or more of the checkout events may be correlated with the items picked up by the respective tracked customers … After correlating an item with its respective identifier, each captured image which is determined to depict the item may be labeled with the respective identifier and may be added to the labeled dataset”, see [0103-0105], see [0157-0161])
receiving a purchase history (Omer, [0160]; “shopping list”) transmitted from the accounting machine that has been associated with the specific user; (Omer, [0160]; “at 412, the items identifier 522 may output the shopping list of one or more of the tracked customers 204”)
identifying a merchandise item included in the purchase history from the accounting machine (Omer, [0160]; “the identifier 522 may output the shopping list in correlation with the respective tracked customer 204 such that the items collected by each tracked customer 204 are correlated with the respective tracked customer 204”); and
such that a difference between a predicted value and an observed value that is the number of pieces on the merchandise item included in the purchase history is minimized (Omer, [0082-0085]; “Key Performance Indicators (KPI) may be calculated for the shopping list compared to the checkout list … based on one or more of the calculated KPIs and/or based on the statistics analysis, one or more recommendations may be automatically generated for improving the estimation performance of the trained supervised ML model(s). The recommendations may relate to one or more aspects of the items and/or of the images captured to depict the items, for example, presentation of the items in the store”), the predicted value being an addition of a number of pieces of merchandise picked up by the specific user from each of the shelves (Omer, [0090]; “Supervised ML models may typically be trained to estimate (predict) items which are represented in the training dataset”, [0043]; “Performance Indicators (KPI) measured for the comparison. The KPIs comprise … a percentage of false negative estimations, a percentage of incorrect estimations of a number of items in one or more shopping lists and/or mapping of incorrect single item estimations of one item as another”), by using a processor (Omer, [0162-0164]; “at 416, for one or more of the checkout events, the items identifier 522 may compare between the shopping list and the checkout list created for the respective tracked customer 204 … Based on the matching the timestamp received from the POS reader(s) 240 for the checkout event and the timestamp of the image(s) depicting the tracked customer 204 in the POS area(s)”).
Omer substantially discloses the claimed invention; however, Omer fails to explicitly disclose “a resolution insufficient”. However, Glaser teaches:
of a resolution insufficient (Glaser, [0130]; “the method may not be dependent on precise, exact information, and thus the collection of image data may be from an existing video surveillance system … The image data can include high resolution video, low resolution video”)
Therefore, it would have been obvious to one of ordinary skill in the art of analyzing customer and purchase data, before the effective filing date, to modify Omer to include insufficient-resolution video/images, as taught by Glaser, in order to enable the utilization of a variety of imaging systems. See Glaser [0131].
The combination of Omer in view of Glaser substantially discloses the claimed invention; however, the combination fails to explicitly disclose “identifying, by solving for an unknown association between each shelf and a corresponding merchandise item, the merchandise item to be associated with the shelf”. However, Adato teaches:
identifying, by solving for an unknown association between each shelf and a corresponding merchandise item, the merchandise item to be associated with the shelf
(Adato [0334]; “In step 1705, the method may comprise selecting a product model subset from among the set of product models based on at least one characteristic of the at least one store shelf and based on analysis of the at least one image … identify characteristic of the at least one shelf in the image … image processing unit 130 may identify the location information … Based on the characteristics, server 135 may determine the category of product that the shelf is for. For example, server 135 may determine the shelf for soft drinks, the shelf for cleaning products, the shelf for personal products, and/or the shelf for books, or the like” … [0340]; “In step 1717, the method may comprise identifying the at least one product based on the analysis of the representation of the at least one product depicted in the at least one image in comparison to the updated product model subset”)
Therefore, it would have been obvious to one of ordinary skill in the art of analyzing customer and purchase data, before the effective filing date, to modify Omer to include identifying, by solving for an unknown association between each shelf and a corresponding merchandise item, the merchandise item to be associated with the shelf, as taught by Adato, in order to increase productivity and, among other potential benefits, to address the technological need for a dynamic solution that automatically monitors retail spaces. See Adato [0003].
Regarding claim 7, and similarly claims 3 and 11: The combination of Omer in view of Glaser, further in view of Adato, discloses the information processing method according to claim 5, further including: setting a predicted value vector and an observed value vector having an element that is the observed value, the predicted value vector resulting from multiplication of a variable vector by a matrix having an element that is the number of times the action of the specific user picking up a piece of merchandise from a shelf among the shelves has been performed (Omer, [0153]; “output the identifier (identity) of each identified item typically with a probability value indicating a probability of a correct identification. The trained supervised ML model(s) 520 may further output a respective one of a plurality of feature vectors each created as known in the art each for a respective one of the supported items that the trained supervised ML model(s) 520 is trained to identify”), the variable vector having an element that is a variable related to a combination of a shelf and a predetermined merchandise item (Omer, [0156]; “Based on the mapping of the feature vector(s) of the unsupported item(s) with respect to the feature vectors of the supported item(s), the items identifier 522 may estimate the identifier of the unsupported item”);
calculating the variable vector that minimizes a value resulting from subtraction of the observed value vector from the predicted value vector (Omer, [0156]; “the items identifier 522 may measure the distance between the feature vector(s) computed for one or more of the unsupported items and the feature vector(s) of one or more of the supported item(s)”); and
identifying, based on values of elements of the variable vector calculated, the merchandise item to be associated with the shelf (Omer, [0156]; “the items identifier 522 may estimate the identifiers of unsupported (new) items without re-training the trained supervised ML model(s) 520 to identify these unsupported items”).
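Purely as an illustrative aside, the vector formulation addressed above (a predicted value vector obtained by multiplying a matrix of pick-up counts by a variable vector, with the difference from an observed value vector minimized) has the shape of an ordinary linear least-squares problem. The sketch below uses NumPy; all matrix values and variable names are hypothetical and are not drawn from the claims or the cited references:

```python
import numpy as np

# Hypothetical example: M[i, j] = number of times the specific user's
# pick-up action from shelf j was counted toward merchandise item i.
M = np.array([[3.0, 0.0],
              [1.0, 2.0]])
# o = observed value vector: pieces of each merchandise item in the
# purchase history.
o = np.array([3.0, 5.0])

# Calculate the variable vector v that minimizes the difference between
# the predicted value vector (M @ v) and the observed value vector o.
v, *_ = np.linalg.lstsq(M, o, rcond=None)

# Identify the shelf/item association from the elements of v, e.g. by
# taking the largest element.
best_index = int(np.argmax(v))
```

In this toy example the system is exactly solvable, so the least-squares residual is zero; with noisy pick-up counts the same call returns the minimizing vector.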
Regarding claim 8, and similarly claims 4 and 12: The combination of Omer in view of Glaser, further in view of Adato, discloses the information processing method according to claim 5, further including identifying, based on the purchase history, a merchandise item commonly purchased by plural users who have picked up the merchandise item from the same shelf, the commonly purchased merchandise item being a merchandise item acquired from the same shelf (Omer, [0165]; “the items identifier 522 may compute one or more of the KPIs … a subset of items displayed at certain display areas in the store 200, … a certain subset of items may include items purchased by a certain type of customers 204 (e.g. children, elderly people, families, etc.). In another example, a certain subset of items may include items purchased by customers 204 during a certain time of the day (e.g. morning, evening, etc.)”).
Claims 2, 6 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Omer in view of Glaser, further in view of Adato, and further in view of MOHANAKRISHNAN et al. (US 2018/0253708 A1, hereinafter “MOHANAKRISHNAN”).
Regarding claim 6, and similarly claims 2 and 10: The combination of Omer in view of Glaser, further in view of Adato, discloses the information processing method according to claim 5, further including: setting a variable related to a combination of a shelf among the shelves and a predetermined merchandise item (Omer, [0165]; “the items identifier 522 may compute one or more of the KPIs … for one or more subsets of the plurality of items offered for sale in the store 200 … such subsets of items may comprise items sharing one or more product attributes … for example, size, color, shape, display location”);
The combination substantially discloses the claimed invention; however, the combination fails to explicitly disclose “calculating a value of the variable, based on simultaneous equations where the predicted value equals the observed value, the predicted value resulting from multiplication of the variable by the number of times the action of the specific user picking up a piece of merchandise from a shelf among the shelves has been performed, and identifying, as the merchandise item to be associated with the shelf, a predetermined merchandise item that forms a pair with the shelf related to a variable for which the calculated value becomes equal to or larger than a threshold”. However, MOHANAKRISHNAN teaches:
calculating a value of the variable, based on simultaneous equations where the predicted value equals the observed value, the predicted value resulting from multiplication of the variable by the number of times the action of the specific user picking up a piece of merchandise from a shelf among the shelves has been performed (MOHANAKRISHNAN, [0110]; “When for example the length of a difference vector, which represents a difference between two feature vectors, is smaller than a threshold, the identification information setting unit 203 can determine that these two feature vectors are similar to each other. When the length of a difference vector is equal to or greater than a threshold, the identification information setting unit 203 determines that these feature vectors are not similar to each other”); and
identifying, as the merchandise item to be associated with the shelf, a predetermined merchandise item that forms a pair with the shelf related to a variable for which the calculated value becomes equal to or larger than a threshold (MOHANAKRISHNAN, [0111]; “When the records in the past prescribed period of time include a similar feature vector (YES in step 1603), the identification information setting unit 203 generates a new record of candidate information and stores the record in the candidate information storage unit 111 (step 1604)”).
Therefore, it would have been obvious to one of ordinary skill in the art of analyzing customer and purchase data, before the effective filing date, to modify Omer to include calculating a value of the variable, based on simultaneous equations where the predicted value equals the observed value, the predicted value resulting from multiplication of the variable by the number of times the action of the specific user picking up a piece of merchandise from a shelf among the shelves has been performed, and identifying, as the merchandise item to be associated with the shelf, a predetermined merchandise item that forms a pair with the shelf related to a variable for which the calculated value becomes equal to or larger than a threshold, as taught by MOHANAKRISHNAN, in order to introduce automated checkout machines with the ability to reduce manipulation of Point of Sale (POS) registers and to reduce waiting time at POS registers. See MOHANAKRISHNAN [0003].
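As a purely illustrative aside, the simultaneous-equations limitation quoted above (predicted value set equal to observed value, with the calculated variables then compared against a threshold) can be sketched as an exact linear solve followed by thresholding. All numbers and names below are hypothetical and are not drawn from the claims or the cited references:

```python
import numpy as np

# Hypothetical pick-up counts: rows = merchandise items in the purchase
# history, columns = shelf/item-combination variables.
pickups = np.array([[2.0, 1.0],
                    [0.0, 4.0]])
# Observed pieces of each merchandise item in the purchase history.
observed = np.array([4.0, 4.0])

# Solve the simultaneous equations so that the predicted value
# (pickups @ variables) equals the observed value exactly.
variables = np.linalg.solve(pickups, observed)

# Identify shelf/item pairs whose variable is equal to or larger than
# a threshold.
threshold = 1.0
selected_pairs = [j for j, val in enumerate(variables) if val >= threshold]
```

The exact-solve variant differs from the least-squares variant only in requiring the system to be square and consistent; the thresholding step then selects the qualifying shelf/item pairs.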
Conclusion
1. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
2. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Baboo et al. (US 8412656 B1) for teaching association of behaviors and sales data.
Sharma et al. (US 8295597 B1) for teaching automatic behavior analysis.
Fallin et al. (US 2004/0164863 A1) for teaching correlation of POD and alarm-event data.
TSUCHIMOCHI et al. (US 20170068945 A1) for teaching associating store shelf identifier with the commodity identifiers in the store shelf information.
3. Any inquiry concerning this communication or earlier communications from the examiner should be directed to AVIA SALMAN, whose telephone number is (313) 446-4901. The examiner can normally be reached Monday through Friday, 9:00 AM to 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, FAHD OBEID can be reached at (571) 270-3324. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AVIA SALMAN/Primary Patent Examiner, Art Unit 3627