Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-69 have been presented in this application. Claims 1-34 are cancelled. Claims 35-69 are pending in this Office action.
This office action is NON-FINAL.
Drawings
The Drawings filed on 08/08/25 are acceptable for examination purposes.
Specification
The Specification filed on 03/14/25 is acceptable for examination purposes.
Information Disclosure Statement
The information disclosure statement (IDS) filed on 03/14/25 has
been considered by the Examiner and made of record in the application file.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created
doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the
unjustified or improper timewise extension of the “right to exclude” granted by a patent
and to prevent possible harassment by multiple assignees. A nonstatutory double
patenting rejection is appropriate where the conflicting claims are not identical, but at
least one examined application claim is not patentably distinct from the reference
claim(s) because the examined application claim is either anticipated by, or would have
been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46
USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed.
Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum,
686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619
(CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may
be used to overcome an actual or provisional rejection based on nonstatutory double
patenting provided the reference application or patent either is shown to be commonly
owned with the examined application, or claims an invention made as a result of
activities undertaken within the scope of a joint research agreement. See MPEP §
717.02 for applications subject to examination under the first inventor to file provisions
of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for
applications not subject to examination under the first inventor to file provisions of the
AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used.
Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in
which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26,
PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may
be filled out completely online using web-screens. An eTerminal Disclaimer that meets
all requirements is auto-processed and approved immediately upon submission. For
more information about eTerminal Disclaimers, refer to
www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 35-69 of the instant application are rejected on the ground of nonstatutory
obviousness-type double patenting as being unpatentable over claims 35-41 and 43-63 of U.S. Patent No. 12,277,531. Although the claims at issue are not identical,
they are not patentably distinct from each other because claims 35-41 and 43-63 of U.S. Patent No. 12,277,531 recite the elements of claims 35-69 of the instant
application No. 19/079,875.
The claims of the instant application No. 19/079,875 and U.S. Patent No. 12,277,531 are compared as follows:
Instant Application 19/079,875
U.S. Patent No. 12,277,531
35. (New) A method for classifying food items, including: capturing one or more sensor data relating to a disposal event, wherein the disposal event includes a food item being placed into a waste receptacle by a user, and the disposal event is one of a plurality of disposal events relating to the waste receptacle such that food items from multiple disposal events are disposed of consecutively within the waste receptacle without the waste receptacle being emptied, wherein the one or more sensor data includes image data captured from an image sensor above the waste receptacle and wherein the image data includes image data of the food item placed, during the disposal event, within the waste receptacle including food items from one or more previous disposal events, and wherein weight data is obtained from the one or more sensor data; and classifying the food item using the image data, at least in part, automatically using a model trained on, at least, image sensor data; wherein the image data is processed during the classification to isolate new objects within the image data and wherein the food item is associated with the weight data.
35. (Currently Amended) A method for classifying food items, including:
capturing one or more sensor data relating to a disposal event, wherein the disposal event includes a food item being placed into a waste receptacle by a user, and the disposal event is one of a plurality of disposal events relating to the waste receptacle such that food items from multiple events are disposed of consecutively within the waste receptacle without the waste receptacle being emptied, wherein the one or more sensor data includes image data captured from an image sensor above the waste receptacle and wherein the image data includes image data captured while the food item is being placed within the waste receptacle or after the food item has been placed within the waste receptacle onto food items from one or more previous disposal events, and wherein the one or more sensor data includes weight data; processing the image data to isolate new objects within the image data compared to previously captured image data using a segmenter; and classifying the food item using at least the processed image data, at least in part, automatically using a model trained on sensor data, and using the weight data.
36. (New) The method as claimed in claim 35, wherein the image data is processed by a method including the step of: identifying new image data between an image of a previous disposal event and an image of the disposal event using a segmenter such that the food item is isolated from food items in previous disposal events.
36. (Currently Amended) The method as claimed in claim 35, wherein the image data is processed by a method including the step of: identifying new image data between an image of a previous disposal event and an image of the disposal event using the segmenter such that the food item is isolated from food items in previous disposal events.
37. (Previously Presented) The method as claimed in claim 36, wherein the image data is processed by a method including the step of: combining the identified new image data with the image of the disposal event to result in a combined data; wherein the food item is classified using at least the combined data.
37. (New) The method as claimed in claim 36, wherein the image data is processed by a method including the step of: combining the identified new image data with the image of the disposal event to result in a combined data; wherein the food item is classified using at least the combined data.
38. (New) The method as claimed in claim 36, wherein the image data is compared to earlier image data captured before the disposal event, and a difference between the two image data is used to classify the food item.
38. (Previously Presented) The method as claimed in claim 36, wherein the image data is compared to earlier image data captured before the disposal event, and a difference between the two image data is used to classify the food item.
39. (New) The method as claimed in claim 35, wherein the image sensor is a visible light camera.
39. (Previously Presented) The method as claimed in claim 36, wherein the image sensor is a visible light camera.
40. (New) The method as claimed in claim 35, further including detecting a waste receptacle within the image data to isolate potential food items.
40. (Previously Presented) The method as claimed in claim 36, further including detecting a waste receptacle within the image data to isolate potential food items.
41. (New) The method as claimed in claim 35, wherein the image data includes a plurality of concurrently captured images of the food item captured over time during the plurality of disposal events and wherein the food item is classified using an image selected from the plurality of images using a good image selection module.
41. (Currently Amended) The method as claimed in claim 36, wherein the image data includes a plurality of concurrently captured images of the food item captured over time during the plurality of disposal events and wherein the food item is classified using an image selected from the plurality of images using a good image selection module.
48. (New) The method as claimed in claim 35, wherein the model is a neural network.
43. (Previously Presented) The method as claimed in claim 35, wherein the model is a neural network.
49. (New) The method as claimed in claim 35, wherein the food item is classified, at least in part, by an inference engine using the model.
44. (Previously Presented) The method as claimed in claim 35, wherein the food item is classified, at least in part, by an inference engine using the model.
50. (New) The method as claimed in claim 49, wherein the inference engine also uses historical pattern data to classify the food item.
45. (Previously Presented) The method as claimed in claim 44, wherein the inference engine also uses historical pattern data to classify the food item.
51. (New) The method as claimed in claim 50, wherein the inference engine also uses one or more selected from a set of time, location, and immediate historical data to classify the food item.
46. (Previously Presented) The method as claimed in claim 45, wherein the inference engine also uses one or more selected from the set of time, location, and immediate historical data to classify the food item.
52. (New) The method as claimed in claim 50, wherein the inference engine determines a plurality of possible classifications for the food items.
47. (Previously Presented) The method as claimed in claim 45, wherein the inference engine determines a plurality of possible classifications for the food items.
53. (New) The method as claimed in claim 52, wherein a number of possible classifications is based upon a probability for each possible classification exceeding a threshold.
48. (Previously Presented) The method as claimed in claim 47, wherein a number of possible classifications is based upon a probability for each possible classification exceeding a threshold.
54. (New) The method as claimed in claim 52, wherein the plurality of possible classifications is displayed to a user on a user interface.
49. (Previously Presented) The method as claimed in claim 47, wherein the plurality of possible classifications is displayed to a user on a user interface.
55. (New) The method as claimed in claim 54, wherein an input is received by the user to classify the food item.
50. (Previously Presented) The method as claimed in claim 49, wherein an input is received by the user to classify the food item.
56. (New) The method as claimed in claim 52, wherein the inference engine classifies the food item in accordance with the possible classification with the highest probability.
51. (Previously Presented) The method as claimed in claim 47, wherein the inference engine classifies the food item in accordance with the possible classification with the highest probability.
57. (New) The method as claimed in claim 35, wherein the model is trained by capturing sensor data relating to historical food item events and users classifying the food items during the historical food item events.
52. (Previously Presented) The method as claimed in claim 35, wherein the model is trained by capturing sensor data relating to historical food item events and users classifying the food items during the historical food item events.
58. (New) The method as claimed in claim 57, wherein the sensor data relating to historical food item events are captured and the users classify the food items during the historical food item events at a plurality of local food waste devices.
53. (Previously Presented) The method as claimed in claim 52, wherein the sensor data relating to historical food item events are captured and the users classify the food items during the historical food item events at a plurality of local food waste devices.
59. (New) The method as claimed in claim 35, wherein the sensor data is captured at a local device within a commercial kitchen and the disposal event is a commercial kitchen event.
54. (Previously Presented) The method as claimed in claim 35, wherein the sensor data is captured at a local device within a commercial kitchen and the disposal event is a commercial kitchen event.
60. (New) The method as claimed in claim 59, wherein a dynamic decision is made to classify the food item at the local device or a server.
55. (Previously Presented) The method as claimed in claim 54, wherein a dynamic decision is made to classify the food item at the local device or a server.
61. (New) The method as claimed in claim 35, wherein the weight data is obtained from a weight sensor.
56. (Currently Amended) The method as claimed in claim 35, wherein weight data is captured from a weight sensor.
63. (New) The method as claimed in claim 35, wherein the food item is classified, at least in part, automatically using a plurality of models trained on sensor data.
57. (Previously Presented) The method as claimed in claim 35, wherein the food item is classified, at least in part, automatically using a plurality of models trained on sensor data.
64. (New) The method as claimed in claim 61, wherein a first model of the plurality of models is trained on sensor data from global food item events.
58. (Previously Presented) The method as claimed in claim 57, wherein a first model of the plurality of models is trained on sensor data from global food item events.
65. (New) The method as claimed in claim 64, wherein a second model of the plurality of models is trained on sensor data from local food item events.
59. (Previously Presented) The method as claimed in claim 58, wherein a second model of the plurality of models is trained on sensor data from local food item events.
66. (New) The method as claimed in claim 35, wherein the sensor data includes sensor data captured over, at least part of, a duration of the plurality of disposal events.
60. (Previously Presented) The method as claimed in claim 35, wherein the sensor data includes sensor data captured over, at least part of, a duration of the plurality of disposal events.
Claim Rejections - 35 U.S.C. § 103
8. In the event the determination of the status of the application as subject to AIA 35
U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any
correction of the statutory basis for the rejection will not be considered a new ground of
rejection if the prior art relied upon, and the rationale supporting the rejection, would be
the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all
obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the
claimed invention is not identically disclosed as set forth in section 102, if the
differences between the claimed invention and the prior art are such that the
claimed invention as a whole would have been obvious before the effective filing
date of the claimed invention to a person having ordinary skill in the art to which
the claimed invention pertains. Patentability shall not be negated by the manner in
which the invention was made.
Claims 35-48, 57-58 and 60-69 are rejected under 35 U.S.C. 103 as being
unpatentable over Rathore et al. (US 2016/0078414 A1) in view of Bhogal et al. (US
2017/0176019 A1).
Regarding claim 35, Rathore teaches a method for classifying food items, (See
Rathore paragraph [0025], The solid waste item type associated with the stored
solid waste item image in the solid waste item database is the composting
disposition method. The solid waste item computing device then classifies the
banana peel that is sought to be disposed of by the user with the solid waste item
type of the composting disposition method), including: capturing one or more sensor data relating to a disposal event, (See Rathore paragraph [0082], a sensor
system 340 that captures data from the solid waste item. Sensor), wherein the disposal event includes a food item, (See Rathore paragraph [0007], a user an appropriate solid waste receptacle for disposal of a solid waste item), being placed into a waste receptacle by a user, (See Rathore paragraph [0085], system 340 may then map the distance of each pixel onto the RGB image to generate the RGBD image of each solid waste item…waste item in the appropriate solid waste receptacle 110a through 110n), and the disposal event is one of a plurality of disposal events relating to the waste receptacle, (See Rathore paragraph [0082], a sensor system 340 that captures data from the solid waste item. Sensor), such that food items from multiple disposal events are disposed of consecutively within the waste receptacle without the waste receptacle being emptied, (See Rathore paragraph [0020], properly discard each solid waste item into appropriate solid waste receptacles.
Whether individuals intentionally or accidentally discard solid waste items in incorrect solid waste receptacles, businesses and/or disposition facilities suffer the consequences), wherein the one or more sensor data includes image data captured from an image sensor above the waste receptacle, (See Rathore paragraph [0085], system 340 may then map the distance of each pixel onto the RGB image to generate the RGBD image of each solid waste item…waste item in the appropriate solid waste receptacle 110a through 110n), and wherein the image data includes image data of the food item placed, during the disposal event, (See Rathore paragraph [0085], Sensor system 340 may also capture an RGB image that captures each of the solid waste items), within the waste receptacle including food items from one or more previous disposal events, (See Rathore paragraph [0007], The captured image is then compared with stored images in a database to determine in which solid waste receptacle the solid waste item should be discarded).
Rathore does not explicitly disclose wherein weight data is obtained from the one or more sensor data, classifying the food item using the image data, at least in part, automatically using a model trained on, at least, image sensor data; wherein the image data is processed during the classification to isolate new objects within the image data and wherein the food item is associated with the weight data.
However, Bhogal teaches wherein weight data is obtained from the one or more sensor data, (See Bhogal, paragraph [0059], the sensor 700 can include one or more force sensors 750 (e.g., weight sensors)), classifying the food item using the image data, at least in part, automatically using a model trained on, at least, image sensor data, (See Bhogal, paragraph [0129], The foodstuff identifier can be scanned from the food or food packaging in-situ, automatically determined based on the sensor measurements (e.g., from the foodstuff image)), wherein the image data is processed during the classification to isolate new objects within the image data, (See Bhogal, paragraph [0119], the image, classifies the foodstuff within the image, and sends the classification or information associated with the classification (e.g., a recipe) back to the oven. However, the foodstuff can be otherwise classified), and wherein the food item is associated with the weight data, (See Bhogal, paragraph [0059], the sensor 700 can include one or more force sensors 750 (e.g., weight sensors)).
It would have been obvious to one of ordinary skill in the art before the
effective filing date of the claimed invention to modify the method of Rathore to incorporate wherein weight data is obtained from the one or more sensor data, classifying the food item using the image data, at least in part, automatically using a model trained on, at least, image sensor data, wherein the image data is processed during the classification to isolate new objects within the image data, and wherein the food item is associated with the weight data, as taught by Bhogal, in order to automatically identify foodstuff within the cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Claim 69 recites the same limitations as claim 35 above. Therefore, claim
69 is rejected based on the same reasoning.
Regarding claim 36, Rathore taught the method according to claim 35, as
described above. Rathore further teaches wherein the image data is processed by a method including the step of: identifying new image data between an image of a previous disposal event, (See Rathore Abstract, The solid waste item is classified into a solid waste item type based on the captured image of the solid waste item), and an image of the disposal event using a segmenter such that the food item is isolated from food items in previous disposal events, (See Rathore paragraph [0007], The captured image is then compared with stored images in a database to determine in which solid waste receptacle the solid waste item should be
discarded).
Regarding claim 37, Rathore taught the method according to claim 36, as
described above. Rathore further teaches wherein the image data is processed by, (See Rathore paragraph [0008], A processor is configured to classify the solid waste item into a solid waste item type based on the captured image of the solid waste item), a method including the step of: combining the identified new image data with the image of the disposal event to result in a combined data; (See Rathore paragraph [0015], Solid waste items can include any type of solid, liquid, semi-solid and/or gaseous materials. Conventionally, businesses that generate solid waste that results from the consumption of food), wherein the food item is classified using at least the combined data, (See Rathore Abstract, The solid waste item is classified into a solid waste item type based on the captured image of the solid waste item).
Regarding claim 38, Rathore taught the method according to claim 36, as
described above. Rathore further teaches wherein the image data is compared to earlier image data captured before the disposal event, (See Rathore paragraph [0007], The captured image is then compared with stored images in a database to determine in which solid waste receptacle the solid waste item should be discarded), and a difference between the two image data is used to classify the food item, (See Rathore Abstract, The solid waste item is classified into a solid waste item type based on the captured image of the solid waste item).
Regarding claim 39, Rathore taught the method according to claim 35, as
described above.
Rathore does not explicitly disclose wherein the image sensor is a visible light camera.
However, Bhogal teaches wherein the image sensor is a visible light camera,
(The light can be selected to maximize the aesthetics of the cooking foodstuff.
The emitter can emit white light (e.g., cool white light, warm white light, etc.), light
in the visible spectrum…first and second light emitting element arranged on a
first and second side of the camera).
It would have been obvious to one of ordinary skill in the art before the
effective filing date of the claimed invention to modify the method of Rathore to incorporate wherein the image sensor is a visible light camera, as taught by Bhogal, in order to automatically identify foodstuff within the
cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Regarding claim 40, Rathore taught the method according to claim 35, as
described above. Rathore further teaches further including detecting a waste receptacle within the image data to isolate potential food items, (See Rathore paragraph [0008], a system includes an image detection system that is configured to capture an image of a solid waste item prior to the solid waste item being discarded in a solid waste receptacle).
Regarding claim 41, Rathore taught the method according to claim 35, as
described above. Rathore further teaches wherein the image data includes a plurality of concurrently captured images, (See Rathore paragraph [0008], A processor is
configured to classify the solid waste item into a solid waste item type based on
the captured image of the solid waste item), of the food item captured over time during the plurality of disposal events and wherein the food item is classified using an image selected from the plurality of images using a good image selection module, (See Rathore paragraph [0034], The user may place each solid waste item in the proximity to image detector module 120 such that image detector module 120 may capture an adequate image of each solid waste item so that each solid waste item may be properly identified from each captured image).
Regarding claim 42, Rathore taught the method according to claim 35, as
described above. Rathore further teaches wherein the image sensor is a video camera and the image data is video data, (See Rathore paragraph [0082], a sensor system 340 that captures data from the solid waste item. Sensor system 340 may include one or more sensors that may be a video imaging system).
Regarding claim 43, Rathore taught the method according to claim 35, as
described above. Rathore further teaches wherein an enhancement apparatus enhances capture of, (See Rathore paragraph [0040], Image co-registration 215 may enhance fusion of the data captured from the visual images and the FLIR images), at least part of, the one or more sensor data, (See Rathore paragraph [0072], a processor 320 and a sensor system 340).
Regarding claim 44, Rathore taught the method according to claim 43, as
described above. Rathore further teaches wherein the enhancement apparatus, (See Rathore paragraph [0040], Image co-registration 215 may enhance fusion of the data captured from the visual images and the FLIR images).
Rathore does not explicitly disclose includes a light positioned to illuminate the food item.
However, Bhogal teaches includes a light positioned to illuminate the food item, (The light can be selected to maximize the aesthetics of the cooking foodstuff.
The emitter can emit white light (e.g., cool white light, warm white light, etc.), light
in the visible spectrum…first and second light emitting element arranged on a
first and second side of the camera).
It would have been obvious to one of ordinary skill in the art before the
effective filing date of the claimed invention to modify the method of Rathore to incorporate a light positioned to illuminate the food item, as taught by Bhogal, in order to automatically identify foodstuff within the cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Regarding claim 45, Rathore taught the method according to claim 35, as
described above. Rathore further teaches wherein the classification of the food item, (See Rathore Abstract, The solid waste item is classified into a solid waste item type based on the captured image of the solid waste item).
Rathore does not explicitly disclose includes one or more selected from a set of type of the food item, state of the food item, and reason for disposal of the food item.
However, Bhogal teaches includes one or more selected from a set of type of the food item, state of the food item, and reason for disposal of the food item, (See Bhogal
paragraph [0036], Upon user confirmation of the recommended food class (or
user selection of the food class for the foodstuff 10), the system can reinforce or
recalibrate the foodstuff recognition modules based on the user selection and
data used to initially recognize the foodstuff 10 (e.g., image), See Bhogal
paragraph [0147], generate foodstuff predictions (e.g., finish time, etc.), perform
population-level analyses (e.g., most common food for a given location or
demographic), or otherwise use the oven measurements).
It would have been obvious to one of ordinary skill in the art before the
effective filing date of the claimed invention to modify the method of Rathore to incorporate one or more selected from a set of type of the food item, state of the food item, and reason for disposal of the food item, as taught by Bhogal, in order to automatically identify foodstuff within
the cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Regarding claim 46, Rathore taught the method according to claim 35, as
described above. Rathore further teaches wherein the image data is captured after the food item has been placed within the receptacle, (See Rathore paragraph [0085], system 340 may then map the distance of each pixel onto the RGB image to generate the RGBD image of each solid waste item…waste item in the appropriate solid waste receptacle 110a through 110n).
Regarding claim 47, Rathore taught the method according to claim 35, as
described above. Rathore further teaches wherein the image data is captured while the food item is being placed within the waste receptacle, (See Rathore paragraph [0085], system 340 may then map the distance of each pixel onto the RGB image to generate the RGBD image of each solid waste item…waste item in the appropriate solid waste receptacle 110a through 110n).
Regarding claim 48, Rathore taught the method according to claim 35, as
described above. Rathore further teaches wherein the model is a neural network, (See Rathore paragraph [0052], Processor 125 may implement a neural network in
customizing the solid waste item database).
Regarding claim 57, Rathore taught the method according to claim 35, as
described above. Rathore further teaches wherein the model is trained by capturing sensor data relating to historical food item events, (See Rathore paragraph [0008], A
processor is configured to classify the solid waste item into a solid waste item
type based on the captured image of the solid waste item), and users classifying the food items during the historical food item events, (See Rathore Abstract, The solid
waste item is classified into a solid waste item type based on the captured image
of the solid waste item).
Regarding claim 58, Rathore taught the method according to claim 57, as
described above. Rathore further teaches wherein the sensor data relating to historical food item events are captured, (See Rathore paragraph [0008], A processor is configured to classify the solid waste item into a solid waste item type based on
the captured image of the solid waste item), and the users classify the food items during the historical food item events at a plurality of local food waste devices, (See Rathore Abstract, The solid waste item is classified into a solid waste item type based on the captured image of the solid waste item).
Regarding claim 60, Rathore taught the method according to claim 59, as
described above. Rathore further teaches wherein a dynamic decision is made to classify the food item at the local device or a server, (See Rathore paragraph [0063],
user makes a decision that the banana peel should be disposed in compost solid
waste receptacle 110n).
Regarding claim 61, Rathore taught the method according to claim 35, as
described above.
Rathore does not explicitly disclose wherein the weight data is obtained from a weight sensor.
However, Bhogal teaches wherein the weight data is obtained from a weight sensor, (See Bhogal paragraph [0059], the sensor 700 can include one or more force sensors 750 (e.g.,
weight sensors)).
It would have been obvious to one of ordinary skill in the art before the
effective filing date of the claimed invention to modify the method of Rathore to incorporate wherein the weight data is obtained from a weight sensor, as taught by Bhogal, in order to automatically identify foodstuff
within the cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Regarding claim 62, Rathore taught the method according to claim 61, as
described above.
Rathore does not explicitly disclose wherein the weight sensor receives data from a scale underneath the waste receptacle.
However, Bhogal teaches wherein the weight sensor receives data from a scale underneath the waste receptacle, (See Bhogal paragraph [0059], the sensor 700 can include one or more force sensors 750 (e.g., weight sensors)).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Rathore to incorporate wherein the weight sensor receives data from a scale underneath the waste receptacle, as taught by Bhogal, in order to automatically identify foodstuff within the cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Regarding claim 63, Rathore taught the method according to claim 63, as
described above.
Rathore does not explicitly disclose wherein the food item is classified, at least in part, automatically using a plurality of models trained on sensor data.
However, Bhogal teaches wherein the food item is classified, at least in part, automatically using a plurality of models trained on sensor data, (See Bhogal, paragraph [0129], The foodstuff identifier can be scanned from the food or food packaging in-situ, automatically determined based on the sensor measurements (e.g., from the foodstuff image)).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Rathore to incorporate wherein the food item is classified, at least in part, automatically using a plurality of models trained on sensor data, as taught by Bhogal, in order to automatically identify foodstuff within the cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Regarding claim 64, Rathore taught the method according to claim 61, as
described above.
Rathore does not explicitly disclose wherein a first model of the plurality of models is trained on sensor data from global food item events.
However, Bhogal teaches wherein a first model of the plurality of models is trained on sensor data, (See Bhogal; paragraph [0054], The processing system 500
of the oven 100 functions to record sensor measurements) from global food item events, (See Bhogal; paragraph [0125], The identified model can subsequently be
used as the global classification model).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Rathore to incorporate wherein a first model of the plurality of models is trained on sensor data from global food item events, as taught by Bhogal, in order to automatically identify foodstuff within the cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Regarding claim 65, Rathore taught the method according to claim 64, as
described above.
Rathore does not explicitly disclose wherein a second model of the plurality of models is trained on sensor data from local food item events.
However, Bhogal teaches wherein a second model of the plurality of models is trained on sensor data from local food item events, (See Bhogal paragraph [0119],
remote computing system, user device, or other computing system can classify
the foodstuff, using an identification module stored locally).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Rathore to incorporate wherein a second model of the plurality of models is trained on sensor data from local food item events, as taught by Bhogal, in order to automatically identify foodstuff within the cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Regarding claim 66, Rathore taught the method according to claim 35, as
described above.
Rathore does not explicitly disclose wherein the sensor data includes sensor data captured over, at least part of, a duration of the plurality of disposal events.
However, Bhogal teaches wherein the sensor data includes sensor data captured over, at least part of, a duration of the plurality of disposal events, (See Bhogal paragraph [0098], Measurements from different sensors can be concurrently recorded or asynchronously recorded (e.g., within a predetermined time duration, separated by a predetermined time duration, etc.)).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Rathore to incorporate wherein the sensor data includes sensor data captured over, at least part of, a duration of the plurality of disposal events, as taught by Bhogal, in order to automatically identify foodstuff within the cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Regarding claim 67, Rathore taught the method according to claim 35, as
described above. Rathore further teaches wherein the food item of at least some of the disposal events is an aggregate food waste item, (See Rathore paragraph [0025], solid waste configurations may include additional combinations of solid waste receptacles).
Regarding claim 68, Rathore teaches a system for classifying food items, (See
Rathore paragraph [0025], The solid waste item type associated with the stored
solid waste item image in the solid waste item database is the composting
disposition method. The solid waste item computing device then classifies the
banana peel that is sought to be disposed of by the user with the solid waste item
type of the composting disposition method), including: one or more sensors configured for capturing one or more sensor data relating to a disposal event, (See Rathore paragraph [0082], a sensor system 340 that captures data from the solid waste item. Sensor), wherein the disposal event includes a food item, (See Rathore paragraph [0007], a user an appropriate solid waste receptacle for disposal of a solid waste item RGBD image of each solid waste item…waste item in the appropriate solid waste receptacle 110a through 110n), being placed into a waste receptacle by a user, (See Rathore paragraph [0085], system 340 may then map the distance of each pixel onto the RGB image to generate the RGBD image of each solid waste item…waste item in the appropriate solid waste receptacle 110a through 110n), and the disposal event is one of a plurality of disposal events relating to the waste receptacle, (See Rathore paragraph [0082], a sensor system 340 that captures data from the solid waste item. Sensor), such that food items from multiple disposal events are disposed of consecutively within the waste receptacle without the waste receptacle being emptied, (See Rathore paragraph [0020], properly discard each solid waste item into appropriate solid waste receptacles. Whether individuals intentionally or accidentally discard solid waste items in incorrect solid waste receptacles, businesses and/or disposition facilities suffer the consequences), wherein the one or more sensor data includes image data captured from an image sensor above the waste receptacle, (See Rathore paragraph [0085], system 340 may then map the distance of each pixel onto the RGB image to generate the RGBD image of each solid waste item…waste item in the appropriate solid waste receptacle 110a through 110n), and wherein the image data includes image data of the food item placed, during the disposal event, (See Rathore paragraph [0085], Sensor system 340 may also capture an RGB image that captures each of the solid waste items), within the waste receptacle including food items from one or more previous disposal events, (See Rathore paragraph [0085], Sensor system 340 may also capture an RGB image that captures each of the solid waste items).
Rathore does not explicitly disclose wherein weight data is obtained from the one or more sensor data; and at least one processor configured for classifying the food item using the image data, at least in part, using a model trained on, at least, image sensor data; and wherein the image data is processed during the classification to isolate new objects within the image data and wherein the food item is associated with the weight data.
However, Bhogal teaches wherein weight data is obtained from the one or more sensor data, (See Bhogal, paragraph [0059], the sensor 700 can include one or more force sensors 750 (e.g., weight sensors)), and at least one processor configured for, (See Rathore paragraph [0008], A processor is configured to classify the solid waste item into a solid waste item type based on the captured image of the solid waste item), classifying the food item using the image data, at least in part, using a model trained on, at least, image sensor data, (See Bhogal, paragraph [0129], The foodstuff identifier can be scanned from the food or food packaging in-situ, automatically determined based on the sensor measurements (e.g., from the foodstuff image)), and wherein the image data is processed during the classification to isolate new objects within the image data, (See Bhogal, paragraph [0119], the image, classifies the foodstuff within the image, and sends the classification or information associated with the classification (e.g., a recipe) back to the oven. However, the foodstuff can be otherwise classified), and wherein the food item is associated with the weight data, (See Bhogal, paragraph [0059], the sensor 700 can include one or more force sensors 750 (e.g., weight sensors)).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Rathore to incorporate wherein weight data is obtained from the one or more sensor data; and at least one processor configured for classifying the food item using the image data, at least in part, using a model trained on, at least, image sensor data; and wherein the image data is processed during the classification to isolate new objects within the image data and wherein the food item is associated with the weight data, as taught by Bhogal, in order to automatically identify foodstuff within the cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Claims 49-56 are rejected under 35 U.S.C. 103 as being unpatentable
over Rathore et al. (US 20160078414) in view of Bhogal et al. (US 2017/0176019 A1)
and further in view of Duncan et al. (US 2017/0220943 A1).
Regarding claim 49, Rathore taught the method according to claim 35, as
described above. Rathore further teaches wherein the food item is classified, (See
Rathore paragraph [0025], The solid waste item type associated with the stored
solid waste item image in the solid waste item database is the composting
disposition method. The solid waste item computing device then classifies the
banana peel that is sought to be disposed of by the user with the solid waste item
type of the composting disposition method).
Rathore together with Bhogal does not explicitly disclose classifying, at least in part, by an inference engine using the model.
However, Duncan teaches at least in part, by an inference engine using the model, (See Duncan paragraph [0130], the inference engine 18 associates a
prediction model with the analysis goal).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Rathore in view of Bhogal to classify the food item, at least in part, by an inference engine using the model, as taught by Duncan, in order to identify and find relationships in data, so that the information obtained is more meaningful for their applications, (See Duncan paragraph [0005]).
Regarding claim 50, Rathore taught the method according to claim 49, as
described above. Rathore further teaches the use of historical pattern data to classify the food item, (See Rathore paragraph [0085], Sensor system 340 may also project an infrared laser pattern onto each solid waste item when several solid waste items are located within the space determined by processor 320).
Rathore together with Bhogal does not explicitly disclose the inference engine.
However, Duncan teaches wherein the inference engine, (See Duncan
paragraph [0130], the inference engine 18 associates a prediction model with the
analysis goal).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Rathore in view of Bhogal to incorporate the inference engine, as taught by Duncan, in order to identify and find relationships in data, so that the information obtained is more meaningful for their applications, (See Duncan paragraph [0005]).
Regarding claim 51, Rathore taught the method according to claim 50, as
described above.
Rathore does not explicitly disclose wherein the inference engine also uses one or more selected from a set of time, location, and immediate historical data to classify the food item.
However, Bhogal teaches wherein the inference engine also uses one or more selected from a set of time, location, and immediate historical data to classify the food item, (See Bhogal paragraph [0036], Upon user confirmation of the recommended food class (or user selection of the food class for the foodstuff 10), the system can reinforce or recalibrate the foodstuff recognition modules based on the user selection and data used to initially recognize the foodstuff 10 (e.g., image), See Bhogal paragraph [0147], generate foodstuff predictions (e.g., finish time, etc.), perform population-level analyses (e.g., most common food for a given location or demographic), or otherwise use the oven measurements).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Rathore to incorporate wherein the inference engine also uses one or more selected from a set of time, location, and immediate historical data to classify the food item, as taught by Bhogal, in order to automatically identify foodstuff within the cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Regarding claim 52, Rathore taught the method according to claim 50, as
described above.
Rathore does not explicitly disclose wherein the inference engine determines a plurality of possible classifications for the food items.
However, Bhogal teaches wherein the inference engine determines a plurality of possible classifications for the food items, (See Rathore paragraph [0025], The solid waste item type associated with the stored solid waste item image in the solid waste item database is the composting disposition method. The solid waste item computing device then classifies the banana peel that is sought to be disposed of by the user with the solid waste item type of the composting disposition method).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Rathore to incorporate wherein the inference engine determines a plurality of possible classifications for the food items, as taught by Bhogal, in order to automatically identify foodstuff within the cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Regarding claim 53, Rathore taught the method according to claim 52, as
described above.
Rathore does not explicitly disclose wherein a number of possible classifications is based upon a probability for each possible classification exceeding a threshold.
However, Bhogal teaches wherein a number of possible classifications is based upon a probability for each possible classification exceeding a threshold, (See Bhogal paragraph [0124], The identification modules can be updated in response to the
user classification differing from the foodstuff parameters identified by the
identification modules (e.g., foodstuff class, foodstuff feature value, etc.) beyond
a threshold difference).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Rathore to incorporate wherein a number of possible classifications is based upon a probability for each possible classification exceeding a threshold, as taught by Bhogal, in order to automatically identify foodstuff within the cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Regarding claim 54, Rathore taught the method according to claim 52, as
described above. Rathore further teaches wherein the plurality of possible classifications, (See Rathore paragraph [0008], A processor is configured to classify the solid waste item into a solid waste item type based on the captured image of the solid waste item), is displayed to a user on a user interface, (See Rathore paragraph [0029], a processor, memory, and/or graphical user display).
Regarding claim 55, Rathore taught the method according to claim 54, as
described above. Rathore further teaches wherein an input is received by the user to classify the food item, (See Rathore paragraph [0081], Processor 320 may
determine the appropriate solid waste receptacle 110a through 110n to discard
the first solid waste item based on the instructions received from processor 125
that previously classified the first solid waste item).
Regarding claim 56, Rathore taught the method according to claim 52, as described above.
Rathore does not explicitly disclose wherein the inference engine classifies the food item in accordance with the possible classification with the highest probability.
However, Bhogal teaches wherein the inference engine classifies the food item in accordance with the possible classification with the highest probability, (See Bhogal paragraph [0119], The remote computing system is preferably the source of truth (e.g., wherein the classification system stored by the remote computing system has higher priority than other copies)…When the foodstuff is classified by a non-oven system, the non-oven system preferably receives the image, classifies the foodstuff within the image).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Rathore to incorporate wherein the inference engine classifies the food item in accordance with the possible classification with the highest probability, as taught by Bhogal, in order to automatically identify foodstuff within the cooking cavity based on the sensor measurements, (See Bhogal Abstract).
Claim 59 is rejected under 35 U.S.C. 103 as being unpatentable over Rathore et al. (US 20160078414) in view of Bhogal et al. (US 2017/0176019 A1) and further in view of Olson et al. (US 2021/0030199 A1).
Regarding claim 59, Rathore taught the method according to claim 35, as described above.
Rathore together with Bhogal does not explicitly disclose wherein the sensor data is captured at a local device, within a commercial kitchen and the food item event is a commercial kitchen event.
However, Olson teaches wherein the sensor data is captured at a local device,
(See Olson paragraph [0015], 3D sensors for visual recognition, computers for
processing information, a laser projection system, and instructions projected by
the laser projection system onto food items or kitchen equipment such as a
commercial grill), within a commercial kitchen and the food item event is a commercial
kitchen event, (See Olson paragraph [0010], a food preparation system for
preparing food items in a commercial kitchen environment includes at least one
camera aimed at the kitchen workspace and a processor in communication with
the camera).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Rathore in view of Bhogal to incorporate wherein the sensor data is captured at a local device, within a commercial kitchen and the food item event is a commercial kitchen event, as taught by Olson, in order to maintain a current state of the system including the status of food items for various types of information, (See Olson paragraph [0016]).
Conclusion/Points of Contact
The prior art made of record and not relied upon is considered pertinent to
applicant’s disclosure. See form PTO-892.
Rabinovich et al. (US 2015/0170001 A1) discloses systems and methods for training a
model to classify food items shown in images. In general, the system can select training
examples for each of a number of classes according to a measure of importance for
each class. The system can then train a model using the selected training examples.
The trained model can be used by a system that provides nutritional information or
recipe search results, for example, in response to images of food.
Shakman et al. (US 2006/0015289 A1) discloses that the food waste monitoring system 100
includes a food waste monitoring device 102 in communication with a host device 104
that may be located remotely from the food waste monitoring device 102. The
monitoring device 102 and host device 104 may be in electronic communication with
each other through a communications network 106.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MULUEMEBET GURMU whose telephone number is (571)270-7095. The examiner can normally be reached M-F 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tony Mahmoudi, can be reached at 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MULUEMEBET GURMU/Primary Examiner, Art Unit 2163