Prosecution Insights
Last updated: April 19, 2026
Application No. 18/109,384

LIVE INVENTORY AND INTERNAL ENVIRONMENTAL SENSING METHOD AND SYSTEM FOR AN OBJECT STORAGE STRUCTURE

Status: Non-Final OA (§103)

Filed: Feb 14, 2023
Examiner: ZAK, JACQUELINE ROSE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Springhouse Technologies Inc.
OA Round: 3 (Non-Final)

Grant Probability: 67% (Favorable)
OA Rounds (est.): 3-4
Time to Grant (est.): 2y 10m
Grant Probability With Interview: 55%

Examiner Intelligence

Career Allow Rate: 67% (above average; 8 granted / 12 resolved; +4.7% vs TC avg)
Interview Lift: -11.4% (minimal; with vs. without interview, based on resolved cases with an interview)
Avg Prosecution: 2y 10m (typical timeline); 46 applications currently pending
Total Applications: 58 (career history, across all art units)
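
As a quick sanity check, the career allow rate above is direct arithmetic over the examiner's resolved cases; a minimal Python sketch using the figures shown:

    # Career allow rate = granted / resolved, from the figures above.
    granted, resolved = 8, 12
    allow_rate = granted / resolved
    print(f"{allow_rate:.0%}")  # -> 67%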

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      5.7%     -34.3%
§103      56.3%    +16.3%
§102      21.1%    -18.9%
§112      13.8%    -26.2%

Based on career data from 12 resolved cases; deltas are measured against a Tech Center average estimate.
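
The statute deltas can be checked the same way; a minimal sketch, assuming each delta is simply the examiner's per-statute rate minus the Tech Center average estimate (the published deltas above are all consistent with a flat 40% baseline):

    # Statute-specific rates from the table above; each delta backs out
    # to the same assumed Tech Center baseline estimate of 40%.
    examiner_rate = {"§101": 5.7, "§103": 56.3, "§102": 21.1, "§112": 13.8}
    TC_AVG_ESTIMATE = 40.0  # assumed flat baseline implied by the deltas

    for statute, rate in examiner_rate.items():
        delta = rate - TC_AVG_ESTIMATE
        print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")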

Office Action (Non-Final, §103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/2025 has been entered.

Claim Status

Claims 1-7, 9-14, 16-24, and 26-27 are pending for examination in the application filed 12/10/2025. Claims 1, 9, 11, 16, 18, and 26 have been amended and claims 8, 15, and 25 have been cancelled.

Priority

Acknowledgement is made of Applicant's claim to priority of provisional application 63/309,785, filing date 02/14/2022.

Response to Arguments and Amendments

The 35 U.S.C. 112(b) rejections of claims 8-10, 15-17, and 25-27 are withdrawn in light of the claim amendments and cancellations.

Applicant's arguments filed 12/10/2025 regarding Ueda have been fully considered but they are not persuasive. Applicant argues on pages 10-12 of the Remarks filed 12/10/2025 that Ueda does not teach the amended limitation of claim 1: “the ML model trained, at least in part, using a crowd-based training method in which image data received from the refrigerator and other refrigerators is stored in a database in communication with the refrigerator and the other refrigerators via the cloud and the image data is trained into the trained ML model”. Applicant specifically argues on page 11 that the pre-learned image in Ueda “strongly suggests that the pattern images exist before the refrigerators are deployed to end users, not that they are acquired from other refrigerators in the field”.

Ueda specifically states that the pre-learned image is processed by refrigerator 1: [0029] The storage unit 11 stores various data processed by the refrigerator 1. For example, the storage unit 11 includes a pattern storage unit 30 for storing a pattern image (pre-learned image) used for pattern matching of the captured image.

Ueda also teaches: [0013] The inventory management system 100 includes refrigerators 1, 1a, 1b…The number and types of the refrigerator 1 and the communication terminal 3 are not limited, and when it is not necessary to explain them individually, the refrigerator 1 and the communication terminal 3 are collectively used. Further, the number of user homes managed by the cloud server 2 is not limited.

Figures 1-2 of Ueda show the system (figures reproduced in the original Office Action). Figure 1 demonstrates the system of refrigerators 1, 1a, 1b, including storage unit 11. Figure 2 demonstrates the connection of refrigerators 1, 1a, 1b to the cloud server 2 via the wide area communication network 4.

Additionally, Ueda states: [0016] The cloud server 2 appropriately provides the refrigerator 1 with information necessary for the refrigerator 1 to execute each of the above functions. Further, the cloud server 2 collects information (inventory information, etc.) of the refrigerator 1 as necessary, and distributes the collected information to the communication terminal 3 of the user of the refrigerator 1.
Thus, Ueda teaches “the ML model trained, at least in part, using a crowd-based training method in which image data received from the refrigerator and other refrigerators is stored in a database in communication with the refrigerator and the other refrigerators via the cloud and the image data is trained into the trained ML model” because crowd-based image data, received from the refrigerators (pre-learned images), is stored in a database (storage unit 11) that is in communication with the other refrigerators via the cloud (Fig. 2). Please see below for the entire amended 35 USC § 103 rejection.

Applicant's arguments with respect to Guack have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Objections

Claim 16 as amended is objected to because of the following informalities: “The computer readable storage medium of claim 1”. “Claim 1” should be “Claim 11”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-6, 9, 11-14, 16, 18-23, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Ueda (JP6938116B2) in view of Zhang (CN112242940A).
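
For orientation before the claim-by-claim analysis: the limitation in dispute describes a pipeline in which fleet refrigerators upload image data to a shared cloud database and the ML model is trained on the pooled data. The following minimal sketch illustrates that claim language only; the names are hypothetical and the code is not from Ueda, Zhang, or the application:

    # Hypothetical sketch of the claimed "crowd-based training method":
    # image data received from many refrigerators is stored in a cloud
    # database, and the ML model is trained on the pooled image data.
    class CloudDatabase:
        def __init__(self):
            self.images = []  # pooled image data from the fleet

        def receive(self, fridge_id, image_bytes):
            self.images.append((fridge_id, image_bytes))

    def train_model(db):
        # Stand-in for a real training job over the pooled images.
        return {"trained_on_images": len(db.images)}

    db = CloudDatabase()
    for fridge_id in ("fridge-001", "fridge-002", "fridge-003"):
        db.receive(fridge_id, b"<captured image>")  # "the refrigerator and other refrigerators"
    model = train_model(db)  # model trained, at least in part, on crowd data
    print(model)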
Regarding claim 1, Ueda teaches a method for dynamically identifying an object being placed in or taken out of a refrigerator ([0006] The present invention has been made in view of the above problems, and an object of the present invention is to realize an inventory management device and an inventory management method suitable for practical use. [0007] Based on the sensor value of the sensor to be detected, the first image showing the hand moving from the outside to the inside of the refrigerator and the second image showing the hand moving from the inside to the outside of the refrigerator are acquired. When the goods are shown in the first image and the goods are not shown in the second image, the delivery determination unit determines that the goods are in stock, and the goods are not shown in the first image. When an article is shown in the image, it is judged that the article has been delivered), the method comprising: detecting a motion of an object at the refrigerator using one or more sensors coupled with the refrigerator or sensing the refrigerator is open ([0022] The sensor 13 is for detecting whether or not the user's hand (and the item if the hand is holding the item) has passed a predetermined place in the refrigerating chamber 1c. The number and installation position of the sensors 13 are appropriately determined according to the type and performance of the sensors 13 and the location where the passage is to be detected. As the sensor 13, for example, an infrared sensor, a temperature sensor, an open / close sensor (of the door portion 1d), an illuminance sensor, or the like is adopted); acquiring one or more images of at least a part of the object as the object is being placed inside the refrigerator or removed from the refrigerator ([0024] The camera 14 is for capturing a still image of a user's hand entering and exiting the refrigerator compartment 1c. [0042] FIG. 6 shows a plurality of scenes seen in the case 2 in which an item (for example, an apple) is delivered by the user, the content of the image pickup control process of the image acquisition unit 20 for each scene, and its trigger); and using the acquired images, tracking the motion of the object, determining a direction of the motion of the object ([0073] (Making in / out determination processing by the warehousing / delivery determination unit 22) Based on the image pair (first image and second image) acquired by the image acquisition unit 20 or the feature amount of each image extracted by the feature amount extraction unit 21. Then, the warehousing / delivery determination unit 22 executes the warehousing / delivery determination process. [0074] First, the warehousing / delivery determination unit 22 reads out the image pair or the feature amount of the image pair in the movement of the hand to be processed from the image storage unit 31 (image database of FIG. 10) (S201). [0081] On the other hand, in S203, when the warehousing / delivery determination unit 22 determines that the item is not shown in the first image (NO in S203), the warehousing / delivery determination unit 22 then determines the presence / absence of the item in the second image. Here, when the warehousing / delivery determination unit 22 determines that the item is shown in the second image (YES in S212), the item identification unit 23 identifies the item 2 shown in the second image (S213). 
Here, the warehousing / delivery determination unit 22 determines that the item 2 has been delivered (S214)), and identifying the object using a trained ML (Machine Learning) model ([0033] The item identification unit 23 identifies an item determined to have been received or delivered, and generates item identification information for identifying the specified item. [0029] The storage unit 11 stores various data processed by the refrigerator 1. For example, the storage unit 11 includes a pattern storage unit 30 for storing a pattern image (pre-learned image) used for pattern matching of the captured image), the ML model trained, at least in part, using a crowd-based training method in which image data received from the refrigerator and other refrigerators is stored in a database in communication with the refrigerator and the other refrigerators via the cloud and the image data is trained into the trained ML model ([0013] The inventory management system 100 includes refrigerators 1, 1a, 1b, ... For storing food, and a cloud server 2 that communicates with the refrigerator 1 via a wide area communication network 4. The inventory management system 100 may further include a small communication terminal 3 carried by the user, if necessary. The communication terminal 3 is, for example, a smartphone, a tablet terminal, a mobile phone, or the like, and is carried by a user. The cloud server 2, the communication terminal 3, and the refrigerator 1 are configured to be connected via the wide area communication network 4. The number and types of the refrigerator 1 and the communication terminal 3 are not limited, and when it is not necessary to explain them individually, the refrigerator 1 and the communication terminal 3 are collectively used. Further, the number of user homes managed by the cloud server 2 is not limited. [0016] In the inventory management system 100 according to the present embodiment, the refrigerator 1 has a function of determining warehousing / delivery of goods and a function of inventory management, and executes the functions. The cloud server 2 appropriately provides the refrigerator 1 with information necessary for the refrigerator 1 to execute each of the above functions. Further, the cloud server 2 collects information (inventory information, etc.) of the refrigerator 1 as necessary, and distributes the collected information to the communication terminal 3 of the user of the refrigerator 1. [0029] The storage unit 11 stores various data processed by the refrigerator 1. For example, the storage unit 11 includes a pattern storage unit 30 for storing a pattern image (pre-learned image) used for pattern matching of the captured image). (See Ueda Figures 1-2 above.) Ueda does not teach the trained ML model includes at least two trained ML model sets including a first model set used for tracking the object in the acquired images and a second model set used for identifying the object in the acquired image. Zhang, in the same field of endeavor of inventory tracking, teaches the trained ML model includes at least two trained ML model sets including a first model set used for tracking the object in the acquired images and a second model set used for identifying the object in the acquired image ([pg. 9] Specifically, the whole system is divided into an imaging module, a labeling module, a training module, an identification module, a track tracking module and a behavior judging module. [pg.
8] Preferably, in step 4), the tracking method of the food comprises: Determining the in-out direction of the food by the position of the camera. [pg. 10-11] In this example, as shown in fig. 4, the image distribution is that the camera bound to the refrigerator uploads the image data to the video streaming servers through the network…Meanwhile, the video streaming server forwards the processed image data to the identification server). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Ueda with the teachings of Zhang to use a first model to track the object and a second model to identify the object because "the recognition mechanism, the video stream server, the recognition server and the storage server are used for carrying out analysis and judgment according to acquired food in-out information, and storing food data information obtained by the analysis and judgment" [Zhang pg. 5]. Regarding claim 2, Ueda and Zhang teach the method of claim 1. Ueda further teaches based on the direction determination, determining whether the object is being added to or removed from the refrigerator ([0073] (Making in / out determination processing by the warehousing / delivery determination unit 22) Based on the image pair (first image and second image) acquired by the image acquisition unit 20 or the feature amount of each image extracted by the feature amount extraction unit 21. Then, the warehousing / delivery determination unit 22 executes the warehousing / delivery determination process. [0074] First, the warehousing / delivery determination unit 22 reads out the image pair or the feature amount of the image pair in the movement of the hand to be processed from the image storage unit 31 (image database of FIG. 10) (S201). [0081] On the other hand, in S203, when the warehousing / delivery determination unit 22 determines that the item is not shown in the first image (NO in S203), the warehousing / delivery determination unit 22 then determines the presence / absence of the item in the second image. Here, when the warehousing / delivery determination unit 22 determines that the item is shown in the second image (YES in S212), the item identification unit 23 identifies the item 2 shown in the second image (S213). Here, the warehousing / delivery determination unit 22 determines that the item 2 has been delivered (S214)). Regarding claim 3, Ueda and Zhang teach the method of claim 1. Ueda further teaches updating a live inventory record associated with the objects within the refrigerator based on the identified object and the determined direction ([0083] (Item identification process by the item identification unit 23) The item identification unit 23 identifies the item included in the image with respect to the image determined by the warehousing / delivery determination unit 22 to have the item. Specifically, in order to manage the inventory of the above-mentioned item determined to be present, the item identification information for identifying the item is generated. [0084] For the image determined to have an item, the feature amount extraction unit 21 specifies the area of the item based on the feature amount extracted in advance. Therefore, the item identification unit 23 can search the pattern storage unit 30 for a pattern image of the item that matches the feature amount of the area of the item. 
Then, the item identification unit 23 reads out the name of the item associated with the searched pattern image and adopts this as the item identification information of the item. For example, when the apple area shown in FIG. 12B is compared with the pattern image and the apple pattern image can be searched, the item identification unit 23 uses the text associated with the pattern image. The data "apple" is associated with the image determined to have the above item and notified to the inventory management unit 24). Regarding claim 4, Ueda and Zhang teach the method of claim 1. Ueda further teaches on a smart device, displaying information based on the live inventory record ([0114] (Outline of Inventory Management System) FIG. 2 is a diagram showing an outline of the inventory management system 300 according to the third embodiment of the present invention. In one embodiment of the present invention, the inventory management device of the present invention is an inventory management device that manages the inventory of a refrigerator that stores items such as food, and the inventory management device is realized as a refrigerator 301. [0121] As shown in FIG. 23, the refrigerator 301 includes at least a control unit 310, a storage unit 311, a weight sensor 314, and a microphone 315, and if necessary, further includes a communication unit 312, a sensor 313, and a speaker 316. And display 317 may be provided. The communication unit 312 includes the above-mentioned home appliance adapter and performs mutual communication with the cloud server 302 via the wide area communication network 4. The microphone 315 acquires the voice input by the user to the control unit 310. The voice acquired by the microphone 315 is converted into input voice data by a voice control unit (not shown), and voice recognition processing is performed by the voice recognition unit 320. The speaker 316 outputs the output voice data processed by the voice control unit as the voice heard by the user. The display 317 displays various data stored in the storage unit 311 so that the user can see it. For example, the display 317 is composed of a display device such as an LCD (liquid crystal display) or an organic ELD (electroluminescence display)). Regarding claim 5, Ueda and Zhang teach the method of claim 1. Ueda further teaches detecting a hand moving through an entrance area of the refrigerator ([0022] The sensor 13 is for detecting whether or not the user's hand (and the item if the hand is holding the item) has passed a predetermined place in the refrigerating chamber 1c. The number and installation position of the sensors 13 are appropriately determined according to the type and performance of the sensors 13 and the location where the passage is to be detected. As the sensor 13, for example, an infrared sensor, a temperature sensor, an open / close sensor (of the door portion 1d), an illuminance sensor, or the like is adopted); determining whether the hand is carrying the object; determining what direction the hand is moving ([0024] The camera 14 is for capturing a still image of a user's hand entering and exiting the refrigerator compartment 1c. [0042] FIG. 6 shows a plurality of scenes seen in the case 2 in which an item (for example, an apple) is delivered by the user, the content of the image pickup control process of the image acquisition unit 20 for each scene, and its trigger.
[0073] (Making in / out determination processing by the warehousing / delivery determination unit 22) Based on the image pair (first image and second image) acquired by the image acquisition unit 20 or the feature amount of each image extracted by the feature amount extraction unit 21. Then, the warehousing / delivery determination unit 22 executes the warehousing / delivery determination process. [0074] First, the warehousing / delivery determination unit 22 reads out the image pair or the feature amount of the image pair in the movement of the hand to be processed from the image storage unit 31 (image database of FIG. 10) (S201). [0081] On the other hand, in S203, when the warehousing / delivery determination unit 22 determines that the item is not shown in the first image (NO in S203), the warehousing / delivery determination unit 22 then determines the presence / absence of the item in the second image. Here, when the warehousing / delivery determination unit 22 determines that the item is shown in the second image (YES in S212), the item identification unit 23 identifies the item 2 shown in the second image (S213). Here, the warehousing / delivery determination unit 22 determines that the item 2 has been delivered (S214)), and based on the hand direction determination and the determination as to whether the hand is carrying the object, updating the live inventory record for the refrigerator ([0078] Therefore, the inventory management unit 24 registers a new record of the item 1 in the inventory table stored in the inventory table storage unit 32 so as to correspond to the receipt of the item 1 (S208). When storing the warehousing date and time in the inventory table, the inventory management unit 24 may treat, for example, the imaging date and time of the first image in which the item 1 is captured as the warehousing date and time. At the same time, the inventory management unit 24 deletes the record of the item 2 from the inventory table so as to correspond to the delivery of the item 2 (S209). This completes a series of inventory management for each hand in and out of the processing target). Regarding claim 6, Ueda and Zhang teach the method of claim 1. Ueda further teaches detecting a presence of the object within scanning range of a scanner coupled with the refrigerator ([0022] The sensor 13 is for detecting whether or not the user's hand (and the item if the hand is holding the item) has passed a predetermined place in the refrigerating chamber 1c. The number and installation position of the sensors 13 are appropriately determined according to the type and performance of the sensors 13 and the location where the passage is to be detected. As the sensor 13, for example, an infrared sensor, a temperature sensor, an open / close sensor (of the door portion 1d), an illuminance sensor, or the like is adopted); scanning the object using the scanner ([0023] For example, when the sensor 13 is an infrared sensor and it is desired to detect whether or not a hand has passed through the opening of the refrigerating chamber 1c, the sensor 13 is a refrigerating chamber as shown in FIGS. 3 (a) and 3 (b)); obtaining scanning data from the scanner indicating a characteristic of the object ([0030] Specifically, the image acquisition unit 20 specifies an imaging timing at which a still image should be obtained based on the sensor value supplied from the sensor 13); and storing characteristic information about the object based on the scanning data ([0061] (About the image database) FIG. 
10 is a diagram showing an example of the data structure of the image database stored in the image storage unit 31). Regarding claim 9, Ueda and Zhang teach the method of claim 1. Zhang teaches wherein the first model set resides on local hardware operatively associated with the refrigerator ([pg. 8] According to the flow of fig. 1, a camera installed in a storage space such as a refrigerator and a locker by a user is used for shooting and moving detection of articles such as food, when articles are stored in or taken out of the storage space) and the second model set resides on a remote cloud based system ([pg. 8] when articles are stored in or taken out of the storage space, the camera initiates a movement detection signal to a cloud server. [pg. 18] a camera installed on the refrigerator can capture the food picture and send the food picture to a server for visual identification), the first model set sorting the sequence of images to determine if the object went in or out of the refrigerator ([pg. 7]), and then the local hardware sending one or more of the sequence of images to the second model set to identify the object ([pg. 10-11] In this example, as shown in fig. 4, the image distribution is that the camera bound to the refrigerator uploads the image data to the video streaming servers through the network, the relationship between the camera and the video streaming servers is N to 1, and each video streaming server receives the data uploaded by N cameras. Meanwhile, the video streaming server forwards the processed image data to the identification server, the relation between the video streaming server and the identification server is 1 to N, the video streaming server is used as an intermediate server, on one hand, the video streaming server is connected with a camera of a terminal, and the image data transmitted by different cameras are buffered and primarily processed. On the one hand, the image data are connected with the identification servers, and the image data are distributed and scheduled to different identification servers according to the processing progress of the identification servers. As shown in fig. 4, the identification server performs data exchange with the storage server according to the processing condition in the determining process, for example, when the food is determined to be taken, the identification server takes the corresponding food from the storage server for comparison, and returns the comparison result to the storage server for storage. When the food is judged to be put, the data including the food type, the refrigerator ID, the food characteristics and the like are sent to the storage server for storage). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Ueda with the teachings of Zhang to use a first model to determine if the object went in or out of the refrigerator and a second model to identify the object because "the recognition mechanism, the video stream server, the recognition server and the storage server are used for carrying out analysis and judgment according to acquired food in-out information, and storing food data information obtained by the analysis and judgment" [Zhang pg. 5].
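
The claim 9 mapping turns on a local/cloud split: a first model set on refrigerator-local hardware sorts the image sequence to determine direction, then forwards one or more frames to a second, cloud-hosted model set for identification. A minimal sketch under those assumptions (hypothetical names, not code from Zhang):

    # Hypothetical sketch of the two-model split mapped to claim 9:
    # a local first model decides in/out from the image sequence, then
    # frames are sent to a remote second model for identification.
    def local_direction_model(frames):
        # First model set (local hardware): infer direction from the
        # sequence, e.g. hand leaves empty -> object went in.
        return "in" if frames and frames[-1]["hand_empty"] else "out"

    def cloud_identification_model(frame):
        # Second model set (remote cloud system): identify the object.
        return frame.get("label_hint", "unknown item")

    def process_event(frames):
        direction = local_direction_model(frames)
        item = cloud_identification_model(frames[0])
        return direction, item

    frames = [{"hand_empty": False, "label_hint": "apple"},
              {"hand_empty": True}]
    print(process_event(frames))  # -> ('in', 'apple')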
Regarding claim 11, Ueda teaches a computer readable storage medium including executable computer code embodied in a tangible form ([0166] In the latter case, the refrigerator 1 and the refrigerator 301 are a CPU that executes instructions of a program that is software that realizes each function, and a ROM (Read Only Memory) in which the program and various data are readablely recorded by a computer (or CPU) or a storage device (these are referred to as "recording media"), a RAM (Random Access Memory) for developing the above program, and the like. Then, the object of the present invention is achieved by the computer (or CPU) reading the program from the recording medium and executing the program. As the recording medium, a "non-temporary tangible medium", for example, a tape, a disk, a card, a semiconductor memory, a programmable logic circuit, or the like can be used) wherein the computer readable medium comprises: executable computer code operable to detect a motion of an object being placed in or taken out of a refrigerator using one or more sensors coupled with the refrigerator ([0022] The sensor 13 is for detecting whether or not the user's hand (and the item if the hand is holding the item) has passed a predetermined place in the refrigerating chamber 1c. The number and installation position of the sensors 13 are appropriately determined according to the type and performance of the sensors 13 and the location where the passage is to be detected. As the sensor 13, for example, an infrared sensor, a temperature sensor, an open / close sensor (of the door portion 1d), an illuminance sensor, or the like is adopted); executable computer code operable to acquire one or more images of at least a part of the object ([0024] The camera 14 is for capturing a still image of a user's hand entering and exiting the refrigerator compartment 1c. [0042] FIG. 6 shows a plurality of scenes seen in the case 2 in which an item (for example, an apple) is delivered by the user, the content of the image pickup control process of the image acquisition unit 20 for each scene, and its trigger); executable computer code operable to determine a direction of the motion of the object ([0073] (Making in / out determination processing by the warehousing / delivery determination unit 22) Based on the image pair (first image and second image) acquired by the image acquisition unit 20 or the feature amount of each image extracted by the feature amount extraction unit 21. Then, the warehousing / delivery determination unit 22 executes the warehousing / delivery determination process. [0074] First, the warehousing / delivery determination unit 22 reads out the image pair or the feature amount of the image pair in the movement of the hand to be processed from the image storage unit 31 (image database of FIG. 10) (S201). [0081] On the other hand, in S203, when the warehousing / delivery determination unit 22 determines that the item is not shown in the first image (NO in S203), the warehousing / delivery determination unit 22 then determines the presence / absence of the item in the second image. Here, when the warehousing / delivery determination unit 22 determines that the item is shown in the second image (YES in S212), the item identification unit 23 identifies the item 2 shown in the second image (S213).
Here, the warehousing / delivery determination unit 22 determines that the item 2 has been delivered (S214)); and executable computer code operable to identify the object based on the one or more images ([0033] The item identification unit 23 identifies an item determined to have been received or delivered, and generates item identification information for identifying the specified item), wherein the computer code uses the acquired images to track the motion of the object, determine a direction of the motion of the object ([0073] (Making in / out determination processing by the warehousing / delivery determination unit 22) Based on the image pair (first image and second image) acquired by the image acquisition unit 20 or the feature amount of each image extracted by the feature amount extraction unit 21. Then, the warehousing / delivery determination unit 22 executes the warehousing / delivery determination process. [0074] First, the warehousing / delivery determination unit 22 reads out the image pair or the feature amount of the image pair in the movement of the hand to be processed from the image storage unit 31 (image database of FIG. 10) (S201). [0081] On the other hand, in S203, when the warehousing / delivery determination unit 22 determines that the item is not shown in the first image (NO in S203), the warehousing / delivery determination unit 22 then determines the presence / absence of the item in the second image. Here, when the warehousing / delivery determination unit 22 determines that the item is shown in the second image (YES in S212), the item identification unit 23 identifies the item 2 shown in the second image (S213). Here, the warehousing / delivery determination unit 22 determines that the item 2 has been delivered (S214)), and identify the object using a trained ML (Machine Learning) model ([0033] The item identification unit 23 identifies an item determined to have been received or delivered, and generates item identification information for identifying the specified item. [0029] The storage unit 11 stores various data processed by the refrigerator 1. For example, the storage unit 11 includes a pattern storage unit 30 for storing a pattern image (pre-learned image) used for pattern matching of the captured image), the ML model trained, at least in part, using a crowd-based training method in which image data received from the refrigerator and other refrigerators is stored in a database in communication with the refrigerator and the other refrigerators via the cloud and the image data is trained into the trained ML model ([0013] The inventory management system 100 includes refrigerators 1, 1a, 1b, ... For storing food, and a cloud server 2 that communicates with the refrigerator 1 via a wide area communication network 4. The inventory management system 100 may further include a small communication terminal 3 carried by the user, if necessary. The communication terminal 3 is, for example, a smartphone, a tablet terminal, a mobile phone, or the like, and is carried by a user. The cloud server 2, the communication terminal 3, and the refrigerator 1 are configured to be connected via the wide area communication network 4. The number and types of the refrigerator 1 and the communication terminal 3 are not limited, and when it is not necessary to explain them individually, the refrigerator 1 and the communication terminal 3 are collectively used. Further, the number of user homes managed by the cloud server 2 is not limited. 
[0016] In the inventory management system 100 according to the present embodiment, the refrigerator 1 has a function of determining warehousing / delivery of goods and a function of inventory management, and executes the functions. The cloud server 2 appropriately provides the refrigerator 1 with information necessary for the refrigerator 1 to execute each of the above functions. Further, the cloud server 2 collects information (inventory information, etc.) of the refrigerator 1 as necessary, and distributes the collected information to the communication terminal 3 of the user of the refrigerator 1. [0029] The storage unit 11 stores various data processed by the refrigerator 1. For example, the storage unit 11 includes a pattern storage unit 30 for storing a pattern image (pre-learned image) used for pattern matching of the captured image. See Ueda Figures 1-2 above). Ueda does not teach the trained ML model includes at least two trained ML model sets including a first model set used for tracking the object in the acquired images and a second model set used for identifying the object in the acquired image. Zhang, in the same field of endeavor of inventory tracking, teaches the trained ML model includes at least two trained ML model sets including a first model set used for tracking the object in the acquired images and a second model set used for identifying the object in the acquired image ([pg. 9] Specifically, the whole system is divided into an imaging module, a labeling module, a training module, an identification module, a track tracking module and a behavior judging module. [pg. 8] Preferably, in step 4), the tracking method of the food comprises: Determining the in-out direction of the food by the position of the camera. [pg. 10-11] In this example, as shown in fig. 4, the image distribution is that the camera bound to the refrigerator uploads the image data to the video streaming servers through the network…Meanwhile, the video streaming server forwards the processed image data to the identification server). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the medium of Ueda with the teachings of Zhang to use a first model to track the object and a second model to identify the object because "the recognition mechanism, the video stream server, the recognition server and the storage server are used for carrying out analysis and judgment according to acquired food in-out information, and storing food data information obtained by the analysis and judgment" [Zhang pg. 5]. Regarding claim 12, Ueda and Zhang teach the medium of claim 11. Ueda further teaches executable computer code operable to determine whether the object is being added to or removed from the refrigerator based on the direction determination ([0073] (Making in / out determination processing by the warehousing / delivery determination unit 22) Based on the image pair (first image and second image) acquired by the image acquisition unit 20 or the feature amount of each image extracted by the feature amount extraction unit 21. Then, the warehousing / delivery determination unit 22 executes the warehousing / delivery determination process. [0074] First, the warehousing / delivery determination unit 22 reads out the image pair or the feature amount of the image pair in the movement of the hand to be processed from the image storage unit 31 (image database of FIG. 10) (S201). 
[0081] On the other hand, in S203, when the warehousing / delivery determination unit 22 determines that the item is not shown in the first image (NO in S203), the warehousing / delivery determination unit 22 then determines the presence / absence of the item in the second image. Here, when the warehousing / delivery determination unit 22 determines that the item is shown in the second image (YES in S212), the item identification unit 23 identifies the item 2 shown in the second image (S213). Here, the warehousing / delivery determination unit 22 determines that the item 2 has been delivered (S214)). Regarding claim 13, Ueda and Zhang teach the medium of claim 11. Ueda further teaches executable computer code operable to update a live inventory records data for the refrigerator based on the identified object and the determined direction ([0083] (Item identification process by the item identification unit 23) The item identification unit 23 identifies the item included in the image with respect to the image determined by the warehousing / delivery determination unit 22 to have the item. Specifically, in order to manage the inventory of the above-mentioned item determined to be present, the item identification information for identifying the item is generated. [0084] For the image determined to have an item, the feature amount extraction unit 21 specifies the area of the item based on the feature amount extracted in advance. Therefore, the item identification unit 23 can search the pattern storage unit 30 for a pattern image of the item that matches the feature amount of the area of the item. Then, the item identification unit 23 reads out the name of the item associated with the searched pattern image and adopts this as the item identification information of the item. For example, when the apple area shown in FIG. 12B is compared with the pattern image and the apple pattern image can be searched, the item identification unit 23 uses the text associated with the pattern image. The data "apple" is associated with the image determined to have the above item and notified to the inventory management unit 24). Regarding claim 14, Ueda and Zhang teach the medium of claim 13. Ueda further teaches executable computer code operable to detect a hand moving through an entrance area of the refrigerator ([0022] The sensor 13 is for detecting whether or not the user's hand (and the item if the hand is holding the item) has passed a predetermined place in the refrigerating chamber 1c. The number and installation position of the sensors 13 are appropriately determined according to the type and performance of the sensors 13 and the location where the passage is to be detected. As the sensor 13, for example, an infrared sensor, a temperature sensor, an open / close sensor (of the door portion 1d), an illuminance sensor, or the like is adopted); executable computer code operable to determine whether the hand is carrying the object; executable computer code operable to determine what direction the hand is moving ([0024] The camera 14 is for capturing a still image of a user's hand entering and exiting the refrigerator compartment 1c. [0042] FIG. 6 shows a plurality of scenes seen in the case 2 in which an item (for example, an apple) is delivered by the user, the content of the image pickup control process of the image acquisition unit 20 for each scene, and its trigger. 
[0073] (Making in / out determination processing by the warehousing / delivery determination unit 22) Based on the image pair (first image and second image) acquired by the image acquisition unit 20 or the feature amount of each image extracted by the feature amount extraction unit 21. Then, the warehousing / delivery determination unit 22 executes the warehousing / delivery determination process. [0074] First, the warehousing / delivery determination unit 22 reads out the image pair or the feature amount of the image pair in the movement of the hand to be processed from the image storage unit 31 (image database of FIG. 10) (S201). [0081] On the other hand, in S203, when the warehousing / delivery determination unit 22 determines that the item is not shown in the first image (NO in S203), the warehousing / delivery determination unit 22 then determines the presence / absence of the item in the second image. Here, when the warehousing / delivery determination unit 22 determines that the item is shown in the second image (YES in S212), the item identification unit 23 identifies the item 2 shown in the second image (S213). Here, the warehousing / delivery determination unit 22 determines that the item 2 has been delivered (S214)), and executable computer code operable to update inventory records data for the refrigerator based on the hand direction determination and the determination as to whether the hand is carrying the object ([0078] Therefore, the inventory management unit 24 registers a new record of the item 1 in the inventory table stored in the inventory table storage unit 32 so as to correspond to the receipt of the item 1 (S208). When storing the warehousing date and time in the inventory table, the inventory management unit 24 may treat, for example, the imaging date and time of the first image in which the item 1 is captured as the warehousing date and time. At the same time, the inventory management unit 24 deletes the record of the item 2 from the inventory table so as to correspond to the delivery of the item 2 (S209). This completes a series of inventory management for each hand in and out of the processing target). Regarding claim 16, Ueda and Zhang teach the medium of claim 11. Zhang teaches wherein the first model set resides on local hardware operatively associated with the refrigerator ([pg. 8] According to the flow of fig. 1, a camera installed in a storage space such as a refrigerator and a locker by a user is used for shooting and moving detection of articles such as food, when articles are stored in or taken out of the storage space) and the second model set resides on a remote cloud based system ([pg. 8] According to the flow of fig. 1, a camera installed in a storage space such as a refrigerator and a locker by a user is used for shooting and moving detection of articles such as food, when articles are stored in or taken out of the storage space, the camera initiates a movement detection signal to a cloud server. [pg. 18] a camera installed on the refrigerator can capture the food picture and send the food picture to a server for visual identification), the first model set sorting the sequence of images to determine if the object went in or out of the refrigerator ([pg. 7]), and then the local hardware sending one or more of the sequence of images to the second model set to identify the object ([pg. 10-11] In this example, as shown in fig.
4, the image distribution is that the camera bound to the refrigerator uploads the image data to the video streaming servers through the network, the relationship between the camera and the video streaming servers is N to 1, and each video streaming server receives the data uploaded by N cameras. Meanwhile, the video streaming server forwards the processed image data to the identification server, the relation between the video streaming server and the identification server is 1 to N, the video streaming server is used as an intermediate server, on one hand, the video streaming server is connected with a camera of a terminal, and the image data transmitted by different cameras are buffered and primarily processed. On the one hand, the image data are connected with the identification servers, and the image data are distributed and scheduled to different identification servers according to the processing progress of the identification servers. As shown in fig. 4, the identification server performs data exchange with the storage server according to the processing condition in the determining process, for example, when the food is determined to be taken, the identification server takes the corresponding food from the storage server for comparison, and returns the comparison result to the storage server for storage. When the food is judged to be put, the data including the food type, the refrigerator ID, the food characteristics and the like are sent to the storage server for storage). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the medium of Ueda with the teachings of Zhang to use a first model to determine if the object went in or out of the refrigerator and a second model to identify the object because "the recognition mechanism, the video stream server, the recognition server and the storage server are used for carrying out analysis and judgment according to acquired food in-out information, and storing food data information obtained by the analysis and judgment" [Zhang pg. 5]. Regarding claim 18, Ueda teaches a live inventory system for dynamically identifying an object being placed in or taken out of a refrigerator ([0012] FIG. 2 is a diagram showing an outline of the inventory management system 100 according to the first and second embodiments of the present invention. In one embodiment of the present invention, the inventory management device of the present invention is an inventory management device that manages the inventory of a refrigerator that stores items such as food, and the inventory management device is realized as the refrigerator 1. [0007] the inventory management device according to one aspect of the present invention controls the camera based on the opening and closing of the door of the storage, and an image of the opening of the storage for loading and unloading the goods), the system comprising: a refrigerator (refrigerator 1); one or more cameras (camera 14) operatively associated with the refrigerator; one or more sensors (sensor 13) operatively associated with the refrigerator ([0022] As shown in FIGS. 3A and 3B, a sensor 13 and a camera 14 are provided on the ceiling surface in the refrigerator compartment 1c); at least one processor operatively associated with the refrigerator ([0028] The control unit 10 comprehensively controls the operation of each unit of the refrigerator 1. 
The control unit 10 is composed of, for example, a computer device composed of an arithmetic processing unit such as a CPU and a dedicated processor); and at least one memory circuitry operatively associated with the refrigerator, the at least one memory circuitry including a computer readable storage medium that includes computer code stored in a tangible form wherein the computer code, when executed by the at least one processor ([0166] In the latter case, the refrigerator 1 and the refrigerator 301 are a CPU that executes instructions of a program that is software that realizes each function, and a ROM (Read Only Memory) in which the program and various data are readablely recorded by a computer (or CPU) or a storage device (these are referred to as "recording media"), a RAM (Random Access Memory) for developing the above program, and the like. Then, the object of the present invention is achieved by the computer (or CPU) reading the program from the recording medium and executing the program. As the recording medium, a "non-temporary tangible medium", for example, a tape, a disk, a card, a semiconductor memory, a programmable logic circuit, or the like can be used. Further, the program may be supplied to the computer via an arbitrary transmission medium (communication network, broadcast wave, etc.) capable of transmitting the program), causes the storage system to: detect a motion of an object at the refrigerator using one or more sensors coupled with the refrigerator or sensing the refrigerator is open ([0022] The sensor 13 is for detecting whether or not the user's hand (and the item if the hand is holding the item) has passed a predetermined place in the refrigerating chamber 1c. The number and installation position of the sensors 13 are appropriately determined according to the type and performance of the sensors 13 and the location where the passage is to be detected. As the sensor 13, for example, an infrared sensor, a temperature sensor, an open / close sensor (of the door portion 1d), an illuminance sensor, or the like is adopted); acquire one or more images of at least a part of the object as the object is being placed inside the refrigerator or removed from the refrigerator ([0024] The camera 14 is for capturing a still image of a user's hand entering and exiting the refrigerator compartment 1c. [0042] FIG. 6 shows a plurality of scenes seen in the case 2 in which an item (for example, an apple) is delivered by the user, the content of the image pickup control process of the image acquisition unit 20 for each scene, and its trigger); and using the acquired images, tracking the motion of the object, determining a direction of the motion of the object ([0073] (Making in / out determination processing by the warehousing / delivery determination unit 22) Based on the image pair (first image and second image) acquired by the image acquisition unit 20 or the feature amount of each image extracted by the feature amount extraction unit 21. Then, the warehousing / delivery determination unit 22 executes the warehousing / delivery determination process. [0074] First, the warehousing / delivery determination unit 22 reads out the image pair or the feature amount of the image pair in the movement of the hand to be processed from the image storage unit 31 (image database of FIG. 10) (S201). 
[0081] On the other hand, in S203, when the warehousing / delivery determination unit 22 determines that the item is not shown in the first image (NO in S203), the warehousing / delivery determination unit 22 then determines the presence / absence of the item in the second image. Here, when the warehousing / delivery determination unit 22 determines that the item is shown in the second image (YES in S212), the item identification unit 23 identifies the item 2 shown in the second image (S213). Here, the warehousing / delivery determination unit 22 determines that the item 2 has been delivered (S214)), and identify the object using a trained ML (Machine Learning) model ([0033] The item identification unit 23 identifies an item determined to have been received or delivered, and generates item identification information for identifying the specified item. [0029] The storage unit 11 stores various data processed by the refrigerator 1. For example, the storage unit 11 includes a pattern storage unit 30 for storing a pattern image (pre-learned image) used for pattern matching of the captured image), the ML model trained, at least in part, using a crowd-based training method in which image data received from the refrigerator and other refrigerators is stored in a database in communication with the refrigerator and the other refrigerators via the cloud and the image data is trained into the trained ML model ([0013] The inventory management system 100 includes refrigerators 1, 1a, 1b, ... For storing food, and a cloud server 2 that communicates with the refrigerator 1 via a wide area communication network 4. The inventory management system 100 may further include a small communication terminal 3 carried by the user, if necessary. The communication terminal 3 is, for example, a smartphone, a tablet terminal, a mobile phone, or the like, and is carried by a user. The cloud server 2, the communication terminal 3, and the refrigerator 1 are configured to be connected via the wide area communication network 4. The number and types of the refrigerator 1 and the communication terminal 3 are not limited, and when it is not necessary to explain them individually, the refrigerator 1 and the communication terminal 3 are collectively used. Further, the number of user homes managed by the cloud server 2 is not limited. [0016] In the inventory management system 100 according to the present embodiment, the refrigerator 1 has a function of determining warehousing / delivery of goods and a function of inventory management, and executes the functions. The cloud server 2 appropriately provides the refrigerator 1 with information necessary for the refrigerator 1 to execute each of the above functions. Further, the cloud server 2 collects information (inventory information, etc.) of the refrigerator 1 as necessary, and distributes the collected information to the communication terminal 3 of the user of the refrigerator 1. [0029] The storage unit 11 stores various data processed by the refrigerator 1. For example, the storage unit 11 includes a pattern storage unit 30 for storing a pattern image (pre-learned image) used for pattern matching of the captured image. See Ueda Figures 1-2 above). Ueda does not teach the trained ML model includes at least two trained ML model sets including a first model set used for tracking the object in the acquired images and a second model set used for identifying the object in the acquired image. 
Zhang, in the same field of endeavor of inventory tracking, teaches the trained ML model includes at least two trained ML model sets including a first model set used for tracking the object in the acquired images and a second model set used for identifying the object in the acquired image ([pg. 9] Specifically, the whole system is divided into an imaging module, a labeling module, a training module, an identification module, a track tracking module and a behavior judging module. [pg. 8] Preferably, in step 4), the tracking method of the food comprises: Determining the in-out direction of the food by the position of the camera. [pg. 10-11] In this example, as shown in fig. 4, the image distribution is that the camera bound to the refrigerator uploads the image data to the video streaming servers through the network…Meanwhile, the video streaming server forwards the processed image data to the identification server). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Ueda with the teachings of Zhang to use a first model to track the object and a second model to identify the object because "the recognition mechanism, the video stream server, the recognition server and the storage server are used for carrying out analysis and judgment according to acquired food in-out information, and storing food data information obtained by the analysis and judgment" [Zhang pg. 5]. Regarding claim 19, Ueda and Zhang teach the system of claim 18. Ueda further teaches based on the direction determination, determining whether the object is being added to or removed from the refrigerator ([0073] (Making in / out determination processing by the warehousing / delivery determination unit 22) Based on the image pair (first image and second image) acquired by the image acquisition unit 20 or the feature amount of each image extracted by the feature amount extraction unit 21. Then, the warehousing / delivery determination unit 22 executes the warehousing / delivery determination process. [0074] First, the warehousing / delivery determination unit 22 reads out the image pair or the feature amount of the image pair in the movement of the hand to be processed from the image storage unit 31 (image database of FIG. 10) (S201). [0081] On the other hand, in S203, when the warehousing / delivery determination unit 22 determines that the item is not shown in the first image (NO in S203), the warehousing / delivery determination unit 22 then determines the presence / absence of the item in the second image. Here, when the warehousing / delivery determination unit 22 determines that the item is shown in the second image (YES in S212), the item identification unit 23 identifies the item 2 shown in the second image (S213). Here, the warehousing / delivery determination unit 22 determines that the item 2 has been delivered (S214)). Regarding claim 20, Ueda and Zhang teach the system of claim 18. Ueda further teaches updating a live inventory record associated with the objects within the refrigerator based on the identified object and the determined direction ([0083] (Item identification process by the item identification unit 23) The item identification unit 23 identifies the item included in the image with respect to the image determined by the warehousing / delivery determination unit 22 to have the item. 
Specifically, in order to manage the inventory of the above-mentioned item determined to be present, the item identification information for identifying the item is generated. [0084] For the image determined to have an item, the feature amount extraction unit 21 specifies the area of the item based on the feature amount extracted in advance. Therefore, the item identification unit 23 can search the pattern storage unit 30 for a pattern image of the item that matches the feature amount of the area of the item. Then, the item identification unit 23 reads out the name of the item associated with the searched pattern image and adopts this as the item identification information of the item. For example, when the apple area shown in FIG. 12B is compared with the pattern image and the apple pattern image can be searched, the item identification unit 23 uses the text associated with the pattern image. The data "apple" is associated with the image determined to have the above item and notified to the inventory management unit 24).

Regarding claim 21, Ueda and Zhang teach the system of claim 18. Ueda further teaches on a smart device, displaying information based on the live inventory record ([0114] (Outline of Inventory Management System) FIG. 2 is a diagram showing an outline of the inventory management system 300 according to the third embodiment of the present invention. In one embodiment of the present invention, the inventory management device of the present invention is an inventory management device that manages the inventory of a refrigerator that stores items such as food, and the inventory management device is realized as a refrigerator 301. [0121] As shown in FIG. 23, the refrigerator 301 includes at least a control unit 310, a storage unit 311, a weight sensor 314, and a microphone 315, and if necessary, further includes a communication unit 312, a sensor 313, and a speaker 316. And display 317 may be provided. The communication unit 312 includes the above-mentioned home appliance adapter and performs mutual communication with the cloud server 302 via the wide area communication network 4. The microphone 315 acquires the voice input by the user to the control unit 310. The voice acquired by the microphone 315 is converted into input voice data by a voice control unit (not shown), and voice recognition processing is performed by the voice recognition unit 320. The speaker 316 outputs the output voice data processed by the voice control unit as the voice heard by the user. The display 317 displays various data stored in the storage unit 311 so that the user can see it. For example, the display 317 is composed of a display device such as an LCD (liquid crystal display) or an organic ELD (electroluminescence display)).

Regarding claim 22, Ueda and Zhang teach the system of claim 18. Ueda further teaches detecting a hand moving through an entrance area of the refrigerator ([0022] The sensor 13 is for detecting whether or not the user's hand (and the item if the hand is holding the item) has passed a predetermined place in the refrigerating chamber 1c. The number and installation position of the sensors 13 are appropriately determined according to the type and performance of the sensors 13 and the location where the passage is to be detected.
As the sensor 13, for example, an infrared sensor, a temperature sensor, an open / close sensor (of the door portion 1d), an illuminance sensor, or the like is adopted); determining whether the hand is carrying the object; determining what direction the hand is moving ([0024] The camera 14 is for capturing a still image of a user's hand entering and exiting the refrigerator compartment 1c. [0042] FIG. 6 shows a plurality of scenes seen in the case 2 in which an item (for example, an apple) is delivered by the user, the content of the image pickup control process of the image acquisition unit 20 for each scene, and its trigger. [0073] (Making in / out determination processing by the warehousing / delivery determination unit 22) Based on the image pair (first image and second image) acquired by the image acquisition unit 20 or the feature amount of each image extracted by the feature amount extraction unit 21. Then, the warehousing / delivery determination unit 22 executes the warehousing / delivery determination process. [0074] First, the warehousing / delivery determination unit 22 reads out the image pair or the feature amount of the image pair in the movement of the hand to be processed from the image storage unit 31 (image database of FIG. 10) (S201). [0081] On the other hand, in S203, when the warehousing / delivery determination unit 22 determines that the item is not shown in the first image (NO in S203), the warehousing / delivery determination unit 22 then determines the presence / absence of the item in the second image. Here, when the warehousing / delivery determination unit 22 determines that the item is shown in the second image (YES in S212), the item identification unit 23 identifies the item 2 shown in the second image (S213). Here, the warehousing / delivery determination unit 22 determines that the item 2 has been delivered (S214)), and based on the hand direction determination and the determination as to whether the hand is carrying the object, updating the live inventory record for the refrigerator ([0078] Therefore, the inventory management unit 24 registers a new record of the item 1 in the inventory table stored in the inventory table storage unit 32 so as to correspond to the receipt of the item 1 (S208). When storing the warehousing date and time in the inventory table, the inventory management unit 24 may treat, for example, the imaging date and time of the first image in which the item 1 is captured as the warehousing date and time. At the same time, the inventory management unit 24 deletes the record of the item 2 from the inventory table so as to correspond to the delivery of the item 2 (S209). This completes a series of inventory management for each hand in and out of the processing target).

Regarding claim 23, Ueda and Zhang teach the system of claim 18. Ueda further teaches detecting a presence of the object within scanning range of a scanner coupled with the refrigerator ([0022] The sensor 13 is for detecting whether or not the user's hand (and the item if the hand is holding the item) has passed a predetermined place in the refrigerating chamber 1c. The number and installation position of the sensors 13 are appropriately determined according to the type and performance of the sensors 13 and the location where the passage is to be detected.
As the sensor 13, for example, an infrared sensor, a temperature sensor, an open / close sensor (of the door portion 1d), an illuminance sensor, or the like is adopted); scanning the object using the scanner ([0023] For example, when the sensor 13 is an infrared sensor and it is desired to detect whether or not a hand has passed through the opening of the refrigerating chamber 1c, the sensor 13 is a refrigerating chamber as shown in FIGS. 3 (a) and 3 (b)); obtaining scanning data from the scanner indicating a characteristic of the object ([0030] Specifically, the image acquisition unit 20 specifies an imaging timing at which a still image should be obtained based on the sensor value supplied from the sensor 13); and storing characteristic information about the object based on the scanning data ([0061] (About the image database) FIG. 10 is a diagram showing an example of the data structure of the image database stored in the image storage unit 31).

Regarding claim 26, Ueda and Zhang teach the system of claim 18. Zhang teaches wherein the first model set resides on local hardware operatively associated with the refrigerator ([pg. 8] According to the flow of fig. 1, a camera installed in a storage space such as a refrigerator and a locker by a user is used for shooting and moving detection of articles such as food, when articles are stored in or taken out of the storage space) and the second model set resides on a remote cloud based system ([pg. 8] According to the flow of fig. 1, a camera installed in a storage space such as a refrigerator and a locker by a user is used for shooting and moving detection of articles such as food, when articles are stored in or taken out of the storage space, the camera initiates a movement detection signal to a cloud server. [pg. 18] a camera installed on the refrigerator can capture the food picture and send the food picture to a server for visual identification), the first model set sorting the sequence of images to determine if the object went in or out of the refrigerator ([pg. 7]), and then the local hardware sending one or more of the sequence of images to the second model set to identify the object ([pg. 10-11] In this example, as shown in fig. 4, the image distribution is that the camera bound to the refrigerator uploads the image data to the video streaming servers through the network, the relationship between the camera and the video streaming servers is N to 1, and each video streaming server receives the data uploaded by N cameras. Meanwhile, the video streaming server forwards the processed image data to the identification server, the relation between the video streaming server and the identification server is 1 to N, the video streaming server is used as an intermediate server, on one hand, the video streaming server is connected with a camera of a terminal, and the image data transmitted by different cameras are buffered and primarily processed. On the one hand, the image data are connected with the identification servers, and the image data are distributed and scheduled to different identification servers according to the processing progress of the identification servers. As shown in fig.
4, the identification server performs data exchange with the storage server according to the processing condition in the determining process, for example, when the food is determined to be taken, the identification server takes the corresponding food from the storage server for comparison, and returns the comparison result to the storage server for storage. When the food is judged to be put, the data including the food type, the refrigerator ID, the food characteristics and the like are sent to the storage server for storage).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Ueda with the teachings of Zhang to use a first model to determine if the object went in or out of the refrigerator and a second model to identify the object because "the recognition mechanism, the video stream server, the recognition server and the storage server are used for carrying out analysis and judgment according to acquired food in-out information, and storing food data information obtained by the analysis and judgment" [Zhang pg. 5].

Claims 7 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Ueda in view of Zhang and Ma (US20160217417A1).

Regarding claim 7, Ueda and Zhang teach the method of claim 6. Ma, in the same field of endeavor of inventory tracking, teaches wherein the scanner is a near infrared (NIR) scanner ([0067] At step 410, in response to the detection of the object 313, the scanner 505 scans the object 313. The scanner 505 may use any suitable scanning technology. In the illustrated embodiment, for example, the scanner 505 is a near infrared (NIR) scanner). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Ueda with the teachings of Ma to use an NIR scanner because "The reflected light can reveal information regarding the molecular structure of the object 313" [Ma 0067].

Regarding claim 24, Ueda and Zhang teach the system of claim 23. Ma, in the same field of endeavor of inventory tracking, teaches wherein the scanner is a near infrared (NIR) scanner ([0067] At step 410, in response to the detection of the object 313, the scanner 505 scans the object 313. The scanner 505 may use any suitable scanning technology. In the illustrated embodiment, for example, the scanner 505 is a near infrared (NIR) scanner). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Ueda with the teachings of Ma to use an NIR scanner because "The reflected light can reveal information regarding the molecular structure of the object 313" [Ma 0067].

Claims 10, 17, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Ueda in view of Zhang and Guack (US20220067689A1).

Regarding claim 10, Ueda and Zhang teach the method of claim 9. Ueda further teaches as a user moves towards the refrigerator, cameras begin capturing the sequence of images ([0022] The sensor 13 is for detecting whether or not the user's hand (and the item if the hand is holding the item) has passed a predetermined place in the refrigerating chamber 1c. The number and installation position of the sensors 13 are appropriately determined according to the type and performance of the sensors 13 and the location where the passage is to be detected.
As the sensor 13, for example, an infrared sensor, a temperature sensor, an open / close sensor (of the door portion 1d), an illuminance sensor, or the like is adopted); and in parallel, a thread is opened for processing to determine if an object is in hand within the sequence of images, and if the object is going in or out of the refrigerator ([0024] The camera 14 is for capturing a still image of a user's hand entering and exiting the refrigerator compartment 1c. [0042] FIG. 6 shows a plurality of scenes seen in the case 2 in which an item (for example, an apple) is delivered by the user, the content of the image pickup control process of the image acquisition unit 20 for each scene, and its trigger. [0073] (Making in / out determination processing by the warehousing / delivery determination unit 22) Based on the image pair (first image and second image) acquired by the image acquisition unit 20 or the feature amount of each image extracted by the feature amount extraction unit 21. Then, the warehousing / delivery determination unit 22 executes the warehousing / delivery determination process. [0074] First, the warehousing / delivery determination unit 22 reads out the image pair or the feature amount of the image pair in the movement of the hand to be processed from the image storage unit 31 (image database of FIG. 10) (S201). [0081] On the other hand, in S203, when the warehousing / delivery determination unit 22 determines that the item is not shown in the first image (NO in S203), the warehousing / delivery determination unit 22 then determines the presence / absence of the item in the second image. Here, when the warehousing / delivery determination unit 22 determines that the item is shown in the second image (YES in S212), the item identification unit 23 identifies the item 2 shown in the second image (S213). Here, the warehousing / delivery determination unit 22 determines that the item 2 has been delivered (S214)), the local hardware sends one or more of the sequence of images to the remote cloud based system for identification of the object, and once identified ([0014] As a result, the refrigerator 1 can be turned into a so-called network home appliance, the refrigerator 1 can receive necessary information from an external device such as a cloud server 2, and the locally stored information can be stored in the refrigerator. [0070] Therefore, the pattern storage unit 30 may be provided in the cloud server 2, and the refrigerator 1 may be configured to transmit the first image and the second image to the cloud server 2 to request a matching result), updating a live inventory record associated with the objects within the refrigerator ([0078] Therefore, the inventory management unit 24 registers a new record of the item 1 in the inventory table stored in the inventory table storage unit 32 so as to correspond to the receipt of the item 1 (S208). When storing the warehousing date and time in the inventory table, the inventory management unit 24 may treat, for example, the imaging date and time of the first image in which the item 1 is captured as the warehousing date and time. At the same time, the inventory management unit 24 deletes the record of the item 2 from the inventory table so as to correspond to the delivery of the item 2 (S209). This completes a series of inventory management for each hand in and out of the processing target).

Ueda does not teach depth cameras triggering RGB cameras to begin capturing the sequence of images.
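Setting the camera-trigger gap aside for a moment (the Guack reference cited next supplies it), the flow Ueda is cited for above can be made concrete: capture begins as the user approaches, a parallel thread decides whether an object is in hand and which way it is moving, selected frames go to the cloud for identification, and the live inventory record is updated. A minimal Python sketch follows; every function name is invented for illustration and none of it is code from any cited reference.

    import queue
    import threading

    def classify_locally(frames):
        # Stub for the on-device (first) check: is an object in hand, and is
        # it going in or out? A real system would run trained models here.
        return {"in_hand": True, "direction": "in"}

    def identify_in_cloud(frames):
        # Stub for the remote identification step; a real system would upload
        # the selected frames to a cloud service and wait for the label.
        return "apple"

    def update_live_inventory(inventory, item, direction):
        # Increment on the way in, decrement on the way out.
        inventory[item] = inventory.get(item, 0) + (1 if direction == "in" else -1)

    def worker(frame_queue, inventory):
        frames = frame_queue.get()            # blocks until capture hands off frames
        verdict = classify_locally(frames)
        if verdict["in_hand"]:
            item = identify_in_cloud(frames)  # frames leave the device only here
            update_live_inventory(inventory, item, verdict["direction"])

    inventory, frame_queue = {}, queue.Queue()
    thread = threading.Thread(target=worker, args=(frame_queue, inventory))
    thread.start()                            # the parallel processing thread
    frame_queue.put(["frame-1", "frame-2"])   # stands in for the captured sequence
    thread.join()
    print(inventory)                          # {'apple': 1}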
Guack, in the same field of endeavor of object tracking, teaches depth cameras triggering RGB cameras to begin capturing the sequence of images ([0175] In some implementations, the one or more sensors 411 (e.g., the camera 605 shown in FIG. 6) are configured to obtain image data frames. For example, the sensors 414 correspond to one or more RGB cameras (e.g., with a complementary metal-oxide-semiconductor (CMOS) image sensor, or a charge-coupled device (CCD) image sensor), infrared (IR) image sensors, depth cameras, monocular cameras, event-based cameras, a microphone, and/or the like. [0361] In some implementations, the sensor unit 1970 is configured to trigger cameras or the device 1901 based on detection of movements). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Ueda with the teachings of Guack to use depth cameras to trigger RGB cameras "to turn the edge system 1801 into a low-power or sleep mode to save power" [Guack 0344].

Regarding claim 17, Ueda and Zhang teach the medium of claim 16. Ueda further teaches as a user moves towards the refrigerator, cameras begin capturing the sequence of images ([0022] The sensor 13 is for detecting whether or not the user's hand (and the item if the hand is holding the item) has passed a predetermined place in the refrigerating chamber 1c. The number and installation position of the sensors 13 are appropriately determined according to the type and performance of the sensors 13 and the location where the passage is to be detected. As the sensor 13, for example, an infrared sensor, a temperature sensor, an open / close sensor (of the door portion 1d), an illuminance sensor, or the like is adopted); and in parallel, a thread is opened for processing to determine if an object is in hand within the sequence of images, and if the object is going in or out of the refrigerator ([0024] The camera 14 is for capturing a still image of a user's hand entering and exiting the refrigerator compartment 1c. [0042] FIG. 6 shows a plurality of scenes seen in the case 2 in which an item (for example, an apple) is delivered by the user, the content of the image pickup control process of the image acquisition unit 20 for each scene, and its trigger. [0073] (Making in / out determination processing by the warehousing / delivery determination unit 22) Based on the image pair (first image and second image) acquired by the image acquisition unit 20 or the feature amount of each image extracted by the feature amount extraction unit 21. Then, the warehousing / delivery determination unit 22 executes the warehousing / delivery determination process. [0074] First, the warehousing / delivery determination unit 22 reads out the image pair or the feature amount of the image pair in the movement of the hand to be processed from the image storage unit 31 (image database of FIG. 10) (S201). [0081] On the other hand, in S203, when the warehousing / delivery determination unit 22 determines that the item is not shown in the first image (NO in S203), the warehousing / delivery determination unit 22 then determines the presence / absence of the item in the second image. Here, when the warehousing / delivery determination unit 22 determines that the item is shown in the second image (YES in S212), the item identification unit 23 identifies the item 2 shown in the second image (S213).
Here, the warehousing / delivery determination unit 22 determines that the item 2 has been delivered (S214)), the local hardware sends one or more of the sequence of images to the remote cloud based system for identification of the object, and once identified ([0014] As a result, the refrigerator 1 can be turned into a so-called network home appliance, the refrigerator 1 can receive necessary information from an external device such as a cloud server 2, and the locally stored information can be stored in the refrigerator. [0070] Therefore, the pattern storage unit 30 may be provided in the cloud server 2, and the refrigerator 1 may be configured to transmit the first image and the second image to the cloud server 2 to request a matching result), updating a live inventory record associated with the objects within the refrigerator ([0078] Therefore, the inventory management unit 24 registers a new record of the item 1 in the inventory table stored in the inventory table storage unit 32 so as to correspond to the receipt of the item 1 (S208). When storing the warehousing date and time in the inventory table, the inventory management unit 24 may treat, for example, the imaging date and time of the first image in which the item 1 is captured as the warehousing date and time. At the same time, the inventory management unit 24 deletes the record of the item 2 from the inventory table so as to correspond to the delivery of the item 2 (S209). This completes a series of inventory management for each hand in and out of the processing target).

Ueda does not teach depth cameras triggering RGB cameras to begin capturing the sequence of images.

Guack, in the same field of endeavor of object tracking, teaches depth cameras triggering RGB cameras to begin capturing the sequence of images ([0175] In some implementations, the one or more sensors 411 (e.g., the camera 605 shown in FIG. 6) are configured to obtain image data frames. For example, the sensors 414 correspond to one or more RGB cameras (e.g., with a complementary metal-oxide-semiconductor (CMOS) image sensor, or a charge-coupled device (CCD) image sensor), infrared (IR) image sensors, depth cameras, monocular cameras, event-based cameras, a microphone, and/or the like. [0361] In some implementations, the sensor unit 1970 is configured to trigger cameras or the device 1901 based on detection of movements). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the medium of Ueda with the teachings of Guack to use depth cameras to trigger RGB cameras "to turn the edge system 1801 into a low-power or sleep mode to save power" [Guack 0344].

Regarding claim 27, Ueda and Zhang teach the system of claim 26. Ueda further teaches as a user moves towards the refrigerator, cameras begin capturing the sequence of images ([0022] The sensor 13 is for detecting whether or not the user's hand (and the item if the hand is holding the item) has passed a predetermined place in the refrigerating chamber 1c. The number and installation position of the sensors 13 are appropriately determined according to the type and performance of the sensors 13 and the location where the passage is to be detected.
As the sensor 13, for example, an infrared sensor, a temperature sensor, an open / close sensor (of the door portion 1d), an illuminance sensor, or the like is adopted); and in parallel, a thread is opened for processing to determine if an object is in hand within the sequence of images, and if the object is going in or out of the refrigerator ([0024] The camera 14 is for capturing a still image of a user's hand entering and exiting the refrigerator compartment 1c. [0042] FIG. 6 shows a plurality of scenes seen in the case 2 in which an item (for example, an apple) is delivered by the user, the content of the image pickup control process of the image acquisition unit 20 for each scene, and its trigger. [0073] (Making in / out determination processing by the warehousing / delivery determination unit 22) Based on the image pair (first image and second image) acquired by the image acquisition unit 20 or the feature amount of each image extracted by the feature amount extraction unit 21. Then, the warehousing / delivery determination unit 22 executes the warehousing / delivery determination process. [0074] First, the warehousing / delivery determination unit 22 reads out the image pair or the feature amount of the image pair in the movement of the hand to be processed from the image storage unit 31 (image database of FIG. 10) (S201). [0081] On the other hand, in S203, when the warehousing / delivery determination unit 22 determines that the item is not shown in the first image (NO in S203), the warehousing / delivery determination unit 22 then determines the presence / absence of the item in the second image. Here, when the warehousing / delivery determination unit 22 determines that the item is shown in the second image (YES in S212), the item identification unit 23 identifies the item 2 shown in the second image (S213). Here, the warehousing / delivery determination unit 22 determines that the item 2 has been delivered (S214)), the local hardware sends one or more of the sequence of images to the remote cloud based system for identification of the object, and once identified ([0014] As a result, the refrigerator 1 can be turned into a so-called network home appliance, the refrigerator 1 can receive necessary information from an external device such as a cloud server 2, and the locally stored information can be stored in the refrigerator. [0070] Therefore, the pattern storage unit 30 may be provided in the cloud server 2, and the refrigerator 1 may be configured to transmit the first image and the second image to the cloud server 2 to request a matching result), updating a live inventory record associated with the objects within the refrigerator ([0078] Therefore, the inventory management unit 24 registers a new record of the item 1 in the inventory table stored in the inventory table storage unit 32 so as to correspond to the receipt of the item 1 (S208). When storing the warehousing date and time in the inventory table, the inventory management unit 24 may treat, for example, the imaging date and time of the first image in which the item 1 is captured as the warehousing date and time. At the same time, the inventory management unit 24 deletes the record of the item 2 from the inventory table so as to correspond to the delivery of the item 2 (S209). This completes a series of inventory management for each hand in and out of the processing target).

Ueda does not teach depth cameras triggering RGB cameras to begin capturing the sequence of images.
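The camera-trigger gap noted here is what Guack is cited for below. As a rough illustration of what depth cameras triggering RGB capture could look like, here is a hypothetical sketch with stubbed sensing; it is not code from Guack or any other cited reference.

    def depth_motion_detected(depth_m, threshold_m=0.5):
        # Stub: a real system would compare successive depth frames; here a
        # single range reading below the threshold counts as approach motion.
        return depth_m < threshold_m

    def capture_rgb_sequence(n_frames=5):
        # Stub for waking the RGB cameras out of a low-power state.
        return [f"rgb-frame-{i}" for i in range(n_frames)]

    def sensing_loop(depth_readings):
        for depth_m in depth_readings:
            if depth_motion_detected(depth_m):
                return capture_rgb_sequence()   # RGB capture starts only on trigger
        return []                               # otherwise the RGB cameras stay asleep

    print(sensing_loop([2.0, 1.4, 0.3]))        # the 0.3 m reading trips the trigger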
Guack, in the same field of endeavor of object tracking, teaches depth cameras triggering RGB cameras to begin capturing the sequence of images ([0175] In some implementations, the one or more sensors 411 (e.g., the camera 605 shown in FIG. 6) are configured to obtain image data frames. For example, the sensors 414 correspond to one or more RGB cameras (e.g., with a complementary metal-oxide-semiconductor (CMOS) image sensor, or a charge-coupled device (CCD) image sensor), infrared (IR) image sensors, depth cameras, monocular cameras, event-based cameras, a microphone, and/or the like. [0361] In some implementations, the sensor unit 1970 is configured to trigger cameras or the device 1901 based on detection of movements). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Ueda with the teachings of Guack to use depth cameras to trigger RGB cameras "to turn the edge system 1801 into a low-power or sleep mode to save power" [Guack 0344].

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bronicki (US20220114868A1) teaches tracking and identifying refrigerator products in an image.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacqueline R Zak whose telephone number is (571)272-4077. The examiner can normally be reached M-F 9-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACQUELINE R ZAK/
Examiner, Art Unit 2666

/EMILY C TERRELL/
Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Feb 14, 2023: Application Filed
Jun 02, 2025: Non-Final Rejection — §103
Aug 12, 2025: Response Filed
Sep 02, 2025: Final Rejection — §103
Nov 10, 2025: Response after Non-Final Action
Dec 10, 2025: Request for Continued Examination
Jan 06, 2026: Response after Non-Final Action
Feb 23, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586340
PIXEL PERSPECTIVE ESTIMATION AND REFINEMENT IN AN IMAGE
2y 5m to grant; granted Mar 24, 2026
Patent 12462343
MEDICAL DIAGNOSTIC APPARATUS AND METHOD FOR EVALUATION OF PATHOLOGICAL CONDITIONS USING 3D OPTICAL COHERENCE TOMOGRAPHY DATA AND IMAGES
2y 5m to grant; granted Nov 04, 2025
Patent 12373946
ASSAY READING METHOD
2y 5m to grant; granted Jul 29, 2025
Based on the 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview (-11.4%): 55%
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
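As a rough consistency check on these figures: 8 granted of 12 resolved is about 67%, and 67% minus the 11.4-point interview effect is about 55%, which appears to be how the with-interview probability is derived.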
