Prosecution Insights
Last updated: April 19, 2026
Application No. 18/754,050

IMAGE PROCESSING APPARATUS, SERVER DEVICE, AND METHOD THEREOF

Non-Final OA §103
Filed: Jun 25, 2024
Examiner: LEVINE, ADAM L
Art Unit: 3689
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Toshiba TEC Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 36% (At Risk)
OA Rounds: 1-2
To Grant: 4y 5m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 36% (178 granted / 500 resolved; -16.4% vs TC avg)
Interview Lift: +40.8% for resolved cases with interview
Avg Prosecution (typical timeline): 4y 5m
Total Applications (career history): 537 across all art units, 37 currently pending
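As a sanity check, the headline percentages above can be reproduced from the raw counts the dashboard reports. The sketch below assumes the interview lift is simply added to the career allow rate; that assumption is ours, not stated by the tool, but it matches the displayed 76% figure.

```python
# Reproduce the dashboard's headline figures from its raw counts.
# Assumption (not stated by the tool): the interview lift is additive
# on top of the career allow rate.
granted, resolved = 178, 500
interview_lift = 40.8  # percentage points, as reported

allow_rate = granted / resolved * 100       # 35.6
print(round(allow_rate))                    # "Career Allow Rate: 36%"
print(round(allow_rate + interview_lift))   # "With Interview: 76%"
```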

Statute-Specific Performance

§101: 30.9% (-9.1% vs TC avg)
§103: 23.1% (-16.9% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 21.0% (-19.0% vs TC avg)
Deltas are shown against the Tech Center average estimate • Based on career data from 500 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Priority

The USPTO has retrieved certified copies of papers required by 37 CFR 1.55 to obtain the benefit of foreign priority under 35 U.S.C. 119(a)-(d). These papers have been placed of record in the file. A certified English translation is not currently required and has not been filed. Filing of a certified English translation may become necessary during prosecution of this application, such as in the event of an interference or intervening reference. Applicant is advised that should a certified English translation be required, a certified English translation of the foreign application must be submitted in order for applicant to obtain the benefit of foreign priority under 35 U.S.C. 119(a)-(d). See 37 CFR 41.154(b) and 41.202(e) or 37 CFR 1.55 and MPEP § 201.15, respectively.

Information Disclosure Statement (IDS)

The information disclosure statement filed June 25, 2024, fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information referred to therein has not been considered.
A copy is not required if the information was previously submitted in a prior application, provided that the prior application is properly identified in the IDS and is relied on for an earlier filing date under 35 U.S.C. 120, and the IDS submitted in the earlier application complies with 37 CFR 1.98(a)-(c). See 37 CFR 1.98(d). Applicant is not required to submit an IDS in a continuation application listing documents that were previously provided in a properly compliant IDS that was considered during prosecution of the prior application. In this case an IDS has been submitted, but the cited foreign patent documents and non-patent literature documents were not provided, and the IDS did not indicate previous submission of the documents in an identified prior application.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-9, and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Kobres et al. (Patent No. US 9,473,747 B2) in view of Wu (Pub. No. US 2019/0228457 A1).
Kobres teaches a method and store system that includes a checkout apparatus installed near an exit of a store, detects a customer, and performs a checkout process using object recognition to determine items to be purchased by the customer.

Claim 1. Kobres discloses a store system comprising:
● a store server (see at least Kobres figs. 4-5); and
● a checkout apparatus installed near an exit of a store and configured to detect a customer and perform a checkout process using object recognition to determine items to be purchased by the customer (see at least Kobres abstract “cart check and shelf check cameras monitor additions to carts and removals from shelves along with knowledge of location and what is on a particular shelf are employed to analyze which products are selected for purchase”),
wherein the store server includes:
● a memory (see at least Kobres fig. 4),
● a network interface configured to communicate with the checkout apparatus (see at least Kobres fig. 4) and a sensor (see at least Kobres c1:25-35 “sensing arrangements, such as RFID sensing, have also been proposed in this context. As an example of a smart shelf arrangement, various arrangements have been addressed where, as an item is removed from a shelf, the removal is sensed”. Please note: a camera is also a sensor.) and a camera (see at least Kobres figs. 4-5) that are disposed at a particular location in the store, and
● a processor (see at least Kobres c3:20-30).

Kobres teaches all of the above, and all of the below, as noted. It teaches (a) item identification by location, (b) image recognition of a customer, (c) customer selection of items in locations, and (d) using item and location data to assist checkout, but does not explicitly disclose: upon receipt of a signal from the sensor, identify the particular location based on the signal, acquire an image imaged by the camera, and identify the customer based on the image.
Wu also teaches (a) item identification by location, (b) image recognition of a customer, (c) customer selection of items in locations, and (d) using item and location data to assist checkout. Kobres in view of Wu further discloses:
● upon receipt of a signal from the sensor, identify the particular location based on the signal, acquire an image imaged by the camera, and identify the customer based on the image (see at least Kobres fig. 5, c3:5-20 “Memory 415 will preferably store a location identifier or a camera identifier associated with the camera location” in view of Wu ¶0006 “in case that the identity information of the customer represented by an image containing the customer, which is shot by the forward camera, is the same as the identity information acquired in the step (S1), judging that the position of the customer is consistent with the position of the item”).

Therefore it would have been obvious to one of ordinary skill in the art at the time of invention (for pre-AIA applications) or filing (for applications filed under the AIA) to modify the method of Kobres to include, upon receipt of a signal from the sensor, identifying the particular location based on the signal, acquiring an image imaged by the camera, and identifying the customer based on the image, as taught by Wu, since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately. One of ordinary skill in the art would have recognized that the results of the combination were predictable and would result in an improvement. This is because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such features even from a variety of technical fields into methods and systems implemented using similar technological structures (i.e., generic computer and/or network hardware such as processors, servers, etc.).
In this case the areas of technical endeavor are nonetheless similar and overlapping. Applicant has not disclosed that the added feature solves any stated problem or is for any particular purpose beyond the performance of the functions they performed separately, and since each element and its function are shown in the prior art, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself. It would therefore have been an obvious matter of design choice to include the feature from Wu in the method of Kobres. Furthermore, the combination solved no long-felt need. Incorporating cumulative known features is additionally obvious to one of ordinary skill in the art because doing so increases commercial use of a method by attracting users that previously might have chosen between one of the previously known methods.

Kobres in view of Wu further discloses:
● acquire item information indicating one or more items displayed at the particular location (see at least Kobres abstract “knowledge of location and what is on a particular shelf,” c4:7-20 “system uses the context of the location, such as planogram information for an associated shelf camera for an associated cart camera, and the change to the "picture" of the items at rest to narrow the list of possible items,” c3:30-52 “Memory 485 will preferably store a table of items on the shelf…. processor 483 to attempt to recognize the item from a small number of items associated with that shelf location”),
● store in the memory the item information in association with the customer (see at least Kobres abstract “to allow consumers to purchase items in a store with no need to checkout at a traditional checkout lane. … Customer analytic data, as well as, store inventory data are preferably also developed from the camera image data,” figs. 4, 5B-C), and
● in response to a request indicating the customer from the checkout apparatus, control the network interface to transmit the item information to the checkout apparatus (see at least Wu abstract “generating a shopping list of the customer after identifying the take-up action or the put-back action, and the item at which the take-up action or the put-back action aims; and performing checkout of the shopping list,” ¶0010 “a shopping list generation module configured to be connected with the real-time tracking module and configured to … associate the customer with …the item, and generate a shopping list of the customer …; and a checkout module configured to be connected with the shopping list generation module and configured to perform checkout of the shopping list generated by the shopping list generation module”), and
● the checkout apparatus is configured to, when performing the checkout process, determine the items indicated by the item information received from the store server as candidate items for purchase by the customer (see at least Kobres figs. 5B, 5D, c4:7-20 “system uses the context of the location, such as planogram information for an associated shelf camera for an associated cart camera, and the change to the "picture" of the items at rest to narrow the list of possible items…. and maintain a view of it throughout the remainder of the shopping trip” in view of Wu ¶0010 as cited above).

Claim 2. The store system according to claim 1, further comprising:
● the sensor that is configured to issue the signal when detecting the customer (see at least Wu “identifying a pre-registered customer to acquire an identity information of the customer, the identity information containing face data … tracking the customer whose identity information has been acquired, in a shopping place in real time, and acquiring a position of the customer”).

Claim 4.
The store system according to claim 2, wherein the sensor is attached to a predetermined division of a shelf in the store (see at least Kobres fig. 1 shelf camera placement, fig. 5C “local shelf cameras”. Please note: as indicated previously, Kobres includes cameras in its definition of sensors, as registering a presence in addition to capturing an image.).

Claim 5. The store system according to claim 4, wherein
● the memory stores a first table by which a sensor ID of the sensor is associated with the predetermined division of the shelf (see at least Kobres c3:35-50 “a table of items on the shelf in the field of view of digital imager 491 in addition to a camera identifier associated with the location of shelf camera 410. The table will typically be downloaded from the server 450 which downloads it from planogram data 462 in database 460, and updates data as changes occur in the planogram data”), and
● the processor is configured to acquire the sensor ID from the received signal and then search the first table for the predetermined division of the shelf (see at least Kobres fig. 5B, c3:5-20 “Memory 415 will preferably store a location identifier or a camera identifier associated with the camera location so… server 450 can immediately place digital image data forwarded therefrom in the overall framework in the store of the store system database,” c3:35-50 “As such, when shelf camera 410 detects a customer taking an item from the shelf, the software 487 can control the processor 483 to attempt to recognize the item from a small number of items associated with that shelf location”).

Claim 6.
The store system according to claim 5, wherein
● the memory stores a second table by which said one or more items are associated with the predetermined division of the shelf (see at least Kobres c2:50-60 “planogram data associating given store shelves with particular products”), and
● the processor is configured to search the second table for said one or more items displayed on the predetermined division of the shelf (see at least Kobres c3:35-50 “software 487 can control the processor 483 to attempt to recognize the item from a small number of items associated with that shelf location”).

Claim 7. The store system according to claim 4, further comprising:
● the camera that is attached to the shelf (see at least Kobres fig. 1 shelf camera, fig. 5C, c2:60-67 “an array of cameras including cart cameras … and shelf cameras”).

Claim 8. A method performed by a store system that includes a store server and a checkout apparatus installed near an exit of a store and configured to detect a customer and perform a checkout process using object recognition to determine items to be purchased by the customer, the method comprising:
● by the store server, receiving a signal from a sensor, and identifying a particular location in the store based on the signal (see at least Kobres fig. 5, c3:5-20 “Memory 415 will preferably store a location identifier or a camera identifier associated with the camera location” in view of Wu ¶0006 “in case that the identity information of the customer represented by an image containing the customer, which is shot by the forward camera, is the same as the identity information acquired in the step (S1), judging that the position of the customer is consistent with the position of the item”);
● acquiring an image from a camera disposed at the particular location and identifying the customer based on the image (see at least Wu ¶0006 “in case that the identity information of the customer represented by an image containing the customer, which is shot by the forward camera, is the same as the identity information acquired in the step (S1), judging that the position of the customer is consistent with the position of the item”);
● acquiring item information indicating one or more items displayed at the particular location (see at least Kobres abstract “knowledge of location and what is on a particular shelf,” c4:7-20 “system uses the context of the location, such as planogram information for an associated shelf camera for an associated cart camera, and the change to the "picture" of the items at rest to narrow the list of possible items,” c3:30-52 “Memory 485 will preferably store a table of items on the shelf…. processor 483 to attempt to recognize the item from a small number of items associated with that shelf location”);
● storing in a memory the item information in association with the customer (see at least Kobres abstract “to allow consumers to purchase items in a store with no need to checkout at a traditional checkout lane. … Customer analytic data, as well as, store inventory data are preferably also developed from the camera image data,” figs. 4, 5B-C);
● receiving a request indicating the customer from the checkout apparatus, and transmitting the item information to the checkout apparatus (see at least Wu abstract “generating a shopping list of the customer after identifying the take-up action or the put-back action, and the item at which the take-up action or the put-back action aims; and performing checkout of the shopping list,” ¶0010 “a shopping list generation module configured to be connected with the real-time tracking module and configured to … associate the customer with …the item, and generate a shopping list of the customer …; and a checkout module configured to be connected with the shopping list generation module and configured to perform checkout of the shopping list generated by the shopping list generation module”); and
● when performing the checkout process by the checkout apparatus, determining the items indicated by the item information received from the store server as candidate items for purchase by the customer (see at least Kobres figs. 5B, 5D, c4:7-20 “system uses the context of the location, such as planogram information for an associated shelf camera for an associated cart camera, and the change to the "picture" of the items at rest to narrow the list of possible items…. and maintain a view of it throughout the remainder of the shopping trip” in view of Wu ¶0010 as cited above).

Claim 9. The method according to claim 8, further comprising:
● by the sensor, detecting the customer and then issuing the signal (see at least Wu “identifying a pre-registered customer to acquire an identity information of the customer, the identity information containing face data … tracking the customer whose identity information has been acquired, in a shopping place in real time, and acquiring a position of the customer”).

Claim 11.
The method according to claim 9, wherein the sensor is attached to a predetermined division of a shelf in the store (see at least Kobres fig. 1 shelf camera placement, fig. 5C “local shelf cameras”).

Claim 12. The method according to claim 11, further comprising:
● storing in the memory a first table by which a sensor ID of the sensor is associated with the predetermined division of the shelf (see at least Kobres c3:35-50 “a table of items on the shelf in the field of view of digital imager 491 in addition to a camera identifier associated with the location of shelf camera 410. The table will typically be downloaded from the server 450 which downloads it from planogram data 462 in database 460, and updates data as changes occur in the planogram data”); and
● acquiring the sensor ID from the received signal and then searching the first table for the predetermined division of the shelf (see at least Kobres fig. 5B, c3:5-20 “Memory 415 will preferably store a location identifier or a camera identifier associated with the camera location so… server 450 can immediately place digital image data forwarded therefrom in the overall framework in the store of the store system database,” c3:35-50 “As such, when shelf camera 410 detects a customer taking an item from the shelf, the software 487 can control the processor 483 to attempt to recognize the item from a small number of items associated with that shelf location”).

Claim 13.
The method according to claim 12, further comprising:
● storing in the memory a second table by which said one or more items are associated with the predetermined division of the shelf (see at least Kobres c2:50-60 “planogram data associating given store shelves with particular products”); and
● searching the second table for said one or more items displayed on the predetermined division of the shelf (see at least Kobres c3:35-50 “software 487 can control the processor 483 to attempt to recognize the item from a small number of items associated with that shelf location”).

Claim 14. The method according to claim 11, wherein the camera is attached to the shelf (see at least Kobres fig. 1 shelf camera, fig. 5C, c2:60-67 “an array of cameras including cart cameras … and shelf cameras”).

Kobres in view of Wu teaches all of the above, and all of the below, as noted. It teaches (a) item identification by location, (b) image recognition of a customer, (c) customer selection of items in locations, and (d) using item and location data to assist checkout, but does not explicitly disclose wherein the sensor is an infra-red sensor. Reid also teaches (a) item identification by location, (b) image recognition of a customer, (c) customer selection of items in locations, and (d) using item and location data to assist checkout, and further discloses, pertaining to:

Claim 3. The store system according to claim 2, wherein the sensor is an infra-red sensor (see at least Reid ¶0026 “sensor itself can be based on infrared”).

Claim 10. The method according to claim 9, wherein the sensor is an infra-red sensor (see at least Reid ¶0026 “sensor itself can be based on infrared”).
Therefore it would have been obvious to one of ordinary skill in the art at the time of invention (for pre-AIA applications) or filing (for applications filed under the AIA) to modify the method of Kobres in view of Wu to include wherein the sensor is an infra-red sensor, as taught by Reid, since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately. One of ordinary skill in the art would have recognized that the results of the combination were predictable and would result in an improvement. This is because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such features even from a variety of technical fields into methods and systems implemented using similar technological structures (i.e., generic computer and/or network hardware such as processors, servers, etc.). In this case the areas of technical endeavor are nonetheless similar and overlapping. Applicant has not disclosed that the added feature solves any stated problem or is for any particular purpose beyond the performance of the functions they performed separately, and since each element and its function are shown in the prior art, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself. It would therefore have been an obvious matter of design choice to include the feature from Reid in the method of Kobres in view of Wu. Furthermore, the combination solved no long-felt need. Incorporating cumulative known features is additionally obvious to one of ordinary skill in the art because doing so increases commercial use of a method by attracting users that previously might have chosen between one of the previously known methods.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
● Ghoson et al., Pub. No. US 2016/0292507 A1: teaches capturing an image with a user's mobile device and identifying an object in the image by accessing an object database comprising objects in the location of the device.
● Landers, Jr. et al., Pub. No. US 2010/0002902 A1: teaches identification of an item by a server based on an image of the item taken by a camera at the user location.
● Tsuchimochi et al., Pub. No. US 2017/0068945 A1: teaches detecting movement trajectories of customers in a store by using images taken by at least one image-pickup apparatus. A unit identifies a customer who is about to make a purchase, and a commodity recognition unit recognizes the commodity. Commodities displayed at positions corresponding to the detected movement trajectory for the identified customer are set as candidates.
● Rajappa et al., Pub. No. US 2016/0275352 A1: teaches object recognition by matching an object in an image with object images in a database, and uses device location to verify by using known object location information in the database.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM LEVINE whose telephone number is (571) 272-8122. The examiner can normally be reached Monday - Thursday, 9am-7:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein, can be reached at (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ADAM L LEVINE/
Primary Examiner, Art Unit 3689
December 27, 2025
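For readers less familiar with the claim language, the two-table lookup recited in claims 5-6 (and mirrored in method claims 12-13) amounts to chaining two key-value lookups: a first table maps a sensor ID to a shelf division, and a second table maps that division to the items displayed there. A minimal sketch, with all sensor IDs, divisions, and item names invented for illustration (neither Kobres nor Wu uses these values):

```python
# Hypothetical illustration of the claimed two-table lookup; the sensor
# IDs, shelf divisions, and items below are invented for this sketch.
first_table = {"sensor-07": "shelf-3/division-B"}             # sensor ID -> shelf division
second_table = {"shelf-3/division-B": ["cereal", "oatmeal"]}  # division -> displayed items

def items_for_signal(sensor_id: str) -> list[str]:
    # Claims 5/12: acquire the sensor ID from the signal, search the first table.
    division = first_table[sensor_id]
    # Claims 6/13: search the second table for the items on that division.
    return second_table[division]

print(items_for_signal("sensor-07"))  # ['cereal', 'oatmeal']
```

The point of the sketch is only that the claimed "first table" and "second table" are independent mappings consulted in sequence, which is the structure the rejection maps onto Kobres's camera-identifier memory and planogram data.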

Prosecution Timeline

Jun 25, 2024
Application Filed
Dec 27, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597045
MANAGING VEHICLE OPERATOR PROFILES BASED ON TELEMATICS INFERENCES
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12548053
ACCOUNT MANAGER VIRTUAL ASSISTANT USING MACHINE LEARNING TECHNIQUES
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12548067
WEBSITE TRACKING SYSTEM
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12544671
METHOD AND APPARATUS FOR TRAINING RECOMMENDATION MODEL, COMPUTER DEVICE, AND STORAGE MEDIUM
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12547994
SYSTEMS AND METHODS FOR ESTABLISHING MESSAGE ROUTING PATHS THROUGH A COMPUTER NETWORK
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 36%
With Interview: 76% (+40.8%)
Median Time to Grant: 4y 5m
PTA Risk: Low
Based on 500 resolved cases by this examiner. Grant probability derived from career allow rate.
