DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Preliminary Amendment
The Examiner acknowledges the preliminary amendment filed on 03/07/2024; accordingly, claims 46–69 have been examined.
Claim Objections
Claims 57 and 60 are objected to because of the following informalities:
Claim 57 is missing a period at the end of the final claim limitation.
Claim 60 is missing “and” at the end of the second limitation.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 46–51, 54–59, and 62–69 are rejected under 35 U.S.C. 103 as being unpatentable over Milne et al. (US 20230196096 A1; hereafter referred to as Milne) in view of Edwards et al. (US 20200231426 A1; hereafter referred to as Edwards).
Regarding Claim 46, Milne teaches:
A system for detecting an anomaly in a filling process in which a container is filled with a fluid using a filling component (Milne, [0074] “the deep learning techniques described herein (e.g., neural networks for image classification and/or object detection) may be used to detect virtually any type of defects associated with the containers themselves, with the contents (e.g., liquid or lyophilized drug products) of the containers, and/or with the interaction between the containers and their contents (e.g., leaks, etc.)”; Milne, [0077] “Non-limiting examples of defects associated with container contents (e.g., contents of syringes, cartridges, vials, or other container types) may include: … a high or low fill level; etc”; Milne, [0096] “system 100 (e.g., module 116) processes each cropped image using anomaly detection”), the system comprising:
a filling machine configured to perform the filling process in which the container is filled with the fluid using the filling component (Milne, [0070] “Referring first to FIG. 6A, an example syringe 600 includes a hollow barrel 602, a flange 604, a plunger 606 that provides a movable fluid seal within the interior of barrel 602, and a needle shield 608 to cover the syringe needle (not shown in FIG. 6A). Barrel 602 and flange 604 may be formed of glass and/or plastic and plunger 606 may be formed of rubber and/or plastic, for example. The needle shield 608 is separated by a shoulder 610 of syringe 600 by a gap 612. Syringe 600 contains a liquid (e.g., drug product) 614 within barrel 602 and above plunger 606”);
an image capturing device configured to capture a video of the container while the container is being filled with the fluid (Milne, [0040] “System 100 includes a visual inspection system 102 communicatively coupled to a computer system 104. Visual inspection system 102 includes hardware (e.g., a conveyance mechanism, light source(s), camera(s), etc.), as well as firmware and/or software, that is configured to capture digital images of a sample (e.g., a container holding a fluid or lyophilized substance)”; Milne, [0048] “VIS control module 120 may cause a given camera to capture a container image by sending a command or other electronic signal (e.g., generating a pulse on a control line, etc.) to that camera. Visual inspection system 102 may send the captured container images to computer system 104, which may store the images in memory unit 114 for local processing (e.g., by module 132 or module 134”); and
one or more processors configured to execute programming instructions (Milne, [0043] “Processing unit 110 includes one or more processors, each of which may be a programmable microprocessor that executes software instructions stored in memory unit 114 to execute some or all of the functions of computer system 104”) to:
receive from the image capturing device the captured video of the container while the container is being filled with the fluid through the filling component (Milne, [0050] “the computer system 104 stores the container images collected by visual inspection system 102 (possibly after processing by image pre-processing module 132)”; Milne, [0054] “The different light sources 206, 208 and 210 may be used to collect images for detecting defects in different categories.”; Milne, [0100] “1000, at an initial stage 1002, computer system 104 collects (e.g., requests and receives) images from a training library (e.g., image library 140) for analysis”);
generate an alert indicative of the detected anomaly (Milne, [0144] “the results of the comparison are analyzed to generate one or more metrics (e.g., by module 136). The metric(s) may include a binary indicator …. The metric(s) may be displayed to a user (e.g., by module 136 generating a value for display), and/or passed to another software module (e.g., by module 136 generating and transferring to another module data that is used in conjunction with other information to indicate to a user whether the neural network is sufficiently trained), for example”).
While Milne teaches analyzing the images to detect the anomaly (Milne, [0046] “the AVI neural network(s) trained and/or run by module 116 may classify entire images (e.g., defect vs. no defect, or presence or absence of a particular type of defect, etc.), detect objects in images (e.g., detect the position of foreign objects that are not bubbles within container images), or some combination thereof (e.g., one neural network classifying images, and another performing object detection)”; Milne, [0091] “image pre-processing module 132 dynamically localizes regions of interest for defect classes associated with container features having variable positions (e.g., any of features 902 through 908)”; Milne, [0090] “cartridge features that can vary may include features similar to features 902, 904 (for the cartridge piston), 906, and/or 908 (for the gap between a luer lock cap and a shoulder of the cartridge), and vial features that can vary may include features similar to features 902 (for liquid or cake fill level) and/or 906 (for the body width), etc.”), it fails to explicitly teach:
wherein the anomaly comprises one or more of a filling component submersion anomaly, a splash anomaly comprising a splash of fluid within the container, a drip anomaly comprising a drip of fluid within the container, or an unsafe filling component-fluid distance anomaly in which the filling component comes within an unsafe distance of a surface of the fluid in the container;
In the same field of endeavor, Edwards teaches:
analyze an image of the video to detect the anomaly, wherein the anomaly comprises one or more of a filling component submersion anomaly, a splash anomaly comprising a splash of fluid within the container, a drip anomaly comprising a drip of fluid within the container, or an unsafe filling component-fluid distance anomaly in which the filling component comes within an unsafe distance of a surface of the fluid in the container (Edwards, [0042] “the method includes positioning a nozzle above a container; dispensing beer from the nozzle into the container; generating at least one image of the dispensed beer using at least one imaging device; determining a measured foam height from the at least one image of the dispensed beer; determining a liquid upper surface from the at least one image of the dispensed beer; and adjusting a distance between the nozzle and the liquid upper surface as the beer is dispensed to control the measured foam height”; Edwards, [0060] “The processor is configured to determine a measured beverage characteristic and a liquid upper surface from the at least one image of the dispensed beverage; and adjust a distance between the nozzle and the liquid upper surface as the beverage is dispensed to control the measured beverage characteristic”; Examiner’s Note: since the claim recites “one or more,” the Examiner has mapped one of the recited anomalies);
Milne and Edwards are considered analogous art, as both are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Milne with the teachings of Edwards such that the images are analyzed to detect the anomaly, wherein the anomaly comprises an unsafe filling component-fluid distance anomaly in which the filling component comes within an unsafe distance of a surface of the fluid in the container. Doing so would improve tracking and estimating the surface level of the dispensed fluid, prevent losses from spillage, overpours, and pilferage, and address the challenges of quality and time (Edwards, [0004] – [0005], [0160]); thus, one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 47, Milne in view of Edwards teaches the system of claim 46, wherein the one or more processors are further configured to tune the filling machine by altering, based on the detected anomaly, one or more parameters of the filling process that the filling machine is configured to perform (Edwards, [0103] “The combination of the ultrasound sensor 144 and infrared sensor 142 may be used to prevent overflow conditions when excessive foam is produced. The combination of the ultrasound sensor 144 and infrared sensor 142 may be used to more accurately provide a measurement of the upper fluid surface”; Edwards, [0104] “The at least one processor may analyze an image produced by the at least one imaging device. The at least one processor may receive range-measurements from the at least one imaging device… the computing device analyzes the at least one image produced by the at least one imaging device and generates an output containing measurements. The measurements may be used by the at least one processor to control the elevation system 156. The measurements may be used by the at least one processor to control the fluid valve”; Edwards, [0105] “The MCU 174 may be used to receive the image produced by the at least one imaging device. The MCU 174 may determine a liquid upper surface 204 of the dispensed beer 200. The liquid upper surface 204 may be determined from the at least one image of the dispensed beer 200. The MCU 174 may determine a measured foam height 214 from the at least one image of the dispensed beer 200. The MCU 174 may adjust a distance 206 between the nozzle 128 and the liquid upper surface 204”).
Regarding Claim 48, Milne in view of Edwards teaches the system of claim 46, wherein detecting the anomaly comprises:
identifying the filling component in the image (Milne, [0062] “FIG. 5 depicts an example container image 500 generated by a line scan camera…the container is a syringe, and the 2D image generated from the line scans (e.g., by module 132) depicts a fluid meniscus 502, a plunger 504, and a defect 506”; Milne, [0160] “FIGS. 20 through 23, “container” may refer to a syringe, cartridge, vial, or any other type of container, and may refer to a container holding contents (e.g., a pharmaceutical fluid or lyophilized product)”);
identifying the fluid in the container in the image (Milne, [0040] “digital images of a sample (e.g., a container holding a fluid or lyophilized substance)”); and
detecting the anomaly based on a relative position between the filling component and the fluid (Milne, [0046] “the AVI neural network(s) trained and/or run by module 116 may classify entire images (e.g., defect vs. no defect, or presence or absence of a particular type of defect, etc.), detect objects in images (e.g., detect the position of foreign objects that are not bubbles within container images), or some combination thereof (e.g., one neural network classifying images, and another performing object detection)”; Milne, [0091] “image pre-processing module 132 dynamically localizes regions of interest for defect classes associated with container features having variable positions (e.g., any of features 902 through 908)”; Milne, [0090] “cartridge features that can vary may include features similar to features 902, 904 (for the cartridge piston), 906, and/or 908 (for the gap between a luer lock cap and a shoulder of the cartridge), and vial features that can vary may include features similar to features 902 (for liquid or cake fill level) and/or 906 (for the body width)”; Milne, [0135] “heatmap 1600 indicates that the neural network instead made its inference based primarily on the portion of image 1600 depicting the fluid meniscus within the syringe barrel. If (1) module 116 presents heatmap 1600 (overlaid on the container image) to a user, along with an indication that a particular neural network classified the container image as a “defect””).
Regarding Claim 49, Milne in view of Edwards teaches the system of claim 46, wherein:
identifying the filling component comprises determining a first bounding box of the filling component (Milne, [0046] “perform segmentation of the container image or image portion (e.g., pixel-by-pixel classification), or techniques that identify objects and place bounding boxes (or other boundary shapes) around those objects. In some embodiments, memory unit 114 also includes one or more other model types, such as a model for anomaly detection”; Milne, [0051] “neural network evaluation module 136 generally evaluates heatmaps if an AVI neural network performs image classification, and instead evaluates other data indicating particular areas within container images (e.g., bounding boxes or pixel-wise labeled areas) if an AVI neural network performs object detection”); and
identifying the fluid comprises determining a second bounding box of the fluid (Milne, Fig. 19B, [0148] “perform segmentation of the container image or image portion (e.g., pixel-by-pixel classification), or techniques that identify objects and place bounding boxes (or other boundary shapes) around those objects”; Milne, [0149] “of bounding boxes (or other indications of image areas) may be performed manually by a user of a tool that presents both the model-generated area (e.g., model-generated bounding box) and the user-generated area (e.g., user-drawn bounding box) on a display, with both areas overlaid on the container image”; Milne, [0154] “As seen in FIG. 19A, the neural network outputs a number of bounding boxes 1910 of different sizes, corresponding to detected objects of different sizes”).
Regarding Claim 50, Milne in view of Edwards teaches the system of claim 49, wherein detecting the anomaly comprises determining that the first bounding box of the filling component overlaps with the second bounding box of the fluid (Milne, [0085] “defect classes may overlap to some extent. For instance, both image portion 812 and image portion 814 may also be associated with foreign particles within the container”; Milne, [0149] “the process by determining whether any particular areas indicated by data generated by a neural network (e.g., during validation or qualification) correspond to manually-indicated areas for a given container image. With bounding boxes, for example, this may be performed by computing the percentage overlap of the bounding box generated by the neural network with the user-generated (manual label) bounding box (or vice versa), for instance, and comparing the percentage to a threshold”).
Regarding Claim 51, Milne in view of Edwards teaches the system of claim 50, wherein the one or more processors are further configured to classify the detected anomaly as the filling component submersion anomaly when a width of the second bounding box of the fluid is larger than a width of the first bounding box of the filling component by at least one of a threshold width ratio and a threshold width difference (Milne, [0089] “a portion of an image of a cartridge may need to depict a substantial length of the cartridge barrel in order to ensure that the piston is shown. As another example, all of image portions 810 through 816 may need to depict a substantial extra width outside of the barrel diameter in order to ensure that the entire width of the syringe is captured (e.g., due to tolerances in barrel diameter, and/or physical misalignment of the syringe)”; Milne, [0091] “image pre-processing module 132 dynamically localizes regions of interest for defect classes associated with container features having variable positions (e.g., any of features 902 through 908)… In FIG. 9B, image pre-processing module 132 applies an automated, dynamic cropping technique 920 to a container image 922 (e.g., an image captured by visual inspection system 102)”; Milne, [0096] “system 100 (e.g., module 116) processes each cropped image using anomaly detection”).
Regarding Claim 54, Milne in view of Edwards teaches the system of claim 46, wherein detecting the anomaly comprises:
identifying the filling component in the image (Milne, [0052] “Camera 202 captures one or more images of a container 214 (e.g., a syringe, vial, cartridge, or any other suitable type of container) while container 214 is held by agitation mechanism 212”);
determining an area in the image proximate to the filling component (Milne, [0084] “FIG. 8 depicts an automated cropping technique 800 that can be applied to a container image 802. In FIG. 8, an image portion 810 represents an overall area of interest within container image 802, with the areas outside of image portion 810 being irrelevant to the detection of any defect categories”; Milne, [0077] “Non-limiting examples of defects associated with container contents (e.g., contents of syringes, cartridges, vials, or other container types) may include: a foreign particle suspended within liquid contents… a high or low fill level; etc”);
determining a fluid property in the image (Milne, [0062] “the container is a syringe, and the 2D image generated from the line scans (e.g., by module 132) depicts a fluid meniscus 502, a plunger 504, and a defect 506”; Milne, [0129] “if “good” and “defect” training and validation images on average exhibit a slight difference in the position/height of the meniscus, the neural network can be trained to discriminate good/defective images based on meniscus level”); and
detecting the splash anomaly in the area based on the fluid property (Edwards, [0040] “the processor may be further configured to calculate a volume of the dispensed beverage using the at least one image generated by the at least one imaging device”; Edwards, [0088] “the dispensers described herein may dispense any liquid and use imaging to measure the dispensed volume and/or height of the liquid. The liquid may be but is not limited to any chemical, oil, gas, or pharmaceutical liquid”; Edwards, [0137] “The at least one image generated by the at least one imaging device may be analyzed to calculate the volume of the dispensed beer. The at least one image may depict the liquid upper surface 204 and the shape and size of the container 130. The at least one processor may take measurements of the liquid upper surface 204 and the shape and size of the container 130. These measurements may be used by the processor (MCU 174 and/or the computing device) to calculate the volume of the dispensed beer 200”).
Regarding Claim 55, Milne in view of Edwards teaches the system of claim 54, wherein the one or more processors are further configured to determine that the filling component has been completely inserted into the container before determining the area (Milne, [0060] “syringes can be directly pulled from, and reinserted into, syringe tubs with no human handling (e.g., in an enclosure that is cleaner than the surrounding laboratory atmosphere)”; Milne, [0090] “FIG. 9A depicts example features that may vary in position between containers or container lots, for the specific case of a syringe 900. As seen in FIG. 9, features of syringe 900 that may vary in position include a meniscus position 902 (e.g., based on fill levels)”; Milne, [0105] “, library expansion module 134 extracts or copies the portion of the syringe image that depicts the plunger (and possibly the barrel walls above and below the plunger). Next, at stage 1108, library expansion module 134 inserts the plunger (and possibly barrel walls) at a new position along the length of the syringe”).
Regarding Claim 56, Milne in view of Edwards teaches the system of claim 55, wherein determining that the filling component has been completely inserted into the container comprises:
determining a filling component width of the filling component in the image and respective filling component widths of the filling component in one or more additional images preceding the image in the video (Edwards, [0147] “the measured foam height 214 is too low, the MCU 174 may control the elevation system 156 to increase the distance 206 between the nozzle 128 and the liquid upper surface 204. As the beer 200 is dispensed, the at least one imaging device may take a plurality of images. Each time the processor (e.g. MCU 174 and/or the computing device) receives an image from the at least one imaging device, the measured foam height 214 may be reviewed to determine if a change in the distance 206 is needed”);
determining a variation of filling component width based on the filling component width in the image and the respective filling component widths in the one or more additional images preceding the image (Edwards, [0147] “As the beer 200 is dispensed, the at least one imaging device may take a plurality of images. Each time the processor (e.g. MCU 174 and/or the computing device) receives an image from the at least one imaging device, the measured foam height 214 may be reviewed to determine if a change in the distance 206 is needed”); and
determining that the variation of filling component width is below a threshold filling component width (Edwards, [0147] “This process is repeated until the desired volume of dispensed beer 200 is reached, the foam top surface 212 reaches a threshold distance from the container rim 131, or the liquid upper surface 204 reaches a threshold distance from the container rim 131”).
Regarding Claim 57, Milne in view of Edwards teaches the system of claim 55, wherein the one or more processors are further configured to determine that the filling process has started before determining that the filling component has been completely inserted into the container, wherein determining that the filling process has started comprises:
determining whether fluid is detected in the image (Milne, [0040] “capture digital images of a sample (e.g., a container holding a fluid or lyophilized substance)”; Milne, [0062] “FIG. 5 depicts an example container image 500 generated by a line scan camera. In this example, the container is a syringe, and the 2D image generated from the line scans (e.g., by module 132) depicts a fluid meniscus 502”); and
if no fluid is detected, determining that the filling process has not started; otherwise, determining that the filling has started (Edwards, [0146] “FIG. 10 shows the beer dispenser 100 in its starting position, before beer has been dispensed. For example, as shown in FIG. 10, the ultrasound sensor 144 emits an ultrasound-sensing field 145. The ultrasound-sensing field 145 generates an image. The processor receives the image from the ultrasound sensor 144. The processor may determine the position of the container bottom 133. The processor may determine the distance 202 between the nozzle 128 and the container bottom 133. As described above, when the distance 202 between the nozzle 128 and the container bottom 133 reaches a starting threshold value, the elevation system 156 stops its motion upwards. After the starting threshold value is reached, the elevation system 156 may begin to lower the support 126 and the nozzle 128 may begin dispensing beer 200”).
Regarding Claim 58, Milne in view of Edwards teaches the system of claim 54, wherein the area comprises:
a first area on a first side of the filling component (Milne, Fig. 8, [0084] “In FIG. 8, an image portion 810 represents an overall area of interest within container image 802, with the areas outside of image portion 810 being irrelevant to the detection of any defect categories. As seen in FIG. 8, image portion 810 may be somewhat larger than the container (here, a syringe) in order to capture the region of interest even when the container is slightly misaligned. Image pre-processing module 132 may initially crop all container images from visual inspection system 102 down to the image portion 810”); and
a second area on a second side of the filling component opposite the first side (Milne, Fig. 8, [0085] “module 132 reduces image sizes by cropping image 802 (or 810) down to various smaller image portions 812, 814, 816 that are associated with specific defect classes. These include an image portion 812 for detecting a missing needle shield, an image portion 814 for detecting syringe barrel defects, and an image portion 816 for detecting plunger defects”).
Regarding Claim 59, Milne in view of Edwards teaches the system of claim 54, wherein the fluid property comprises a slope in a surface of the fluid, and wherein detecting the splash anomaly in the area comprises:
determining whether the slope exceeds a threshold (Edwards, [0056] “calculating a distance between the container rim and the liquid upper surface; and ceasing the dispensing of beer when the distance between the container rim and the liquid upper surface reaches a threshold”); and
if it is determined that the slope exceeds the threshold, detecting the splash anomaly in the area (Edwards, [0094] “the system may complete the dispensing of beer when the liquid level or foam level reaches the glass rim 131, or reaches a predetermined threshold distance from the glass rim 131”; Edwards, [0134] “By tracking the position of the container rim 131, the measured foam height 214 and/or liquid upper surface 204, the beer dispenser 100 may prevent spillage of beer and minimize waste”).
Regarding Claim 62, Milne in view of Edwards teaches the system of claim 46, wherein the one or more processors are further configured to:
determine that the filling process is nearing completion if a variation of height of the fluid in a number of consecutive images including the image of the video is below a threshold (Edwards, [0055] “the processor may be further configured to determine a position of a container rim from the at least one image; calculate a distance between the container rim and the liquid upper surface; and cease dispensing the beer when the distance between the container rim and the liquid upper surface reaches a threshold”; Edwards, [0094] “the system may complete the dispensing of beer when the liquid level or foam level reaches the glass rim 131, or reaches a predetermined threshold distance from the glass rim 131”; Edwards, [0182] “the position of the container rim 131 is determined from the at least one image. The distance 218 between the container rim 131 and the liquid upper surface 204 is calculated. When the distance 218 between the container rim 131 and the liquid upper surface reaches a threshold, the dispensing of beer 200 is ceased”), wherein:
the image corresponds to a point in time when the filling process is nearing completion (Edwards, [0060] “at least one imaging device for generating at least one image of the dispensed beverage”; Edwards, [0088] “the dispensers described herein may dispense any liquid and use imaging to measure the dispensed volume and/or height of the liquid”; Edwards, [0094] “the system may complete the dispensing of beer when the liquid level or foam level reaches the glass rim 131, or reaches a predetermined threshold distance from the glass rim 131”); and
detecting the anomaly comprises:
identifying the filling component in the image (Milne, [0052] “Camera 202 captures one or more images of a container 214 (e.g., a syringe, vial, cartridge, or any other suitable type of container) while container 214 is held by agitation mechanism 212”);
identifying the fluid in the container in the image (Milne, [0040] “digital images of a sample (e.g., a container holding a fluid or lyophilized substance)”);
determining a distance between the filling component and the fluid (Edwards, [0060] “The processor is configured to determine a measured beverage characteristic and a liquid upper surface from the at least one image of the dispensed beverage; and adjust a distance between the nozzle and the liquid upper surface as the beverage is dispensed to control the measured beverage characteristic”); and
detecting the unsafe filling component-fluid distance anomaly at the point in time when the filling process is nearing completion if it is determined that a distance between the filling component and the fluid is below a threshold distance (Edwards, [0094] “the system may complete the dispensing of beer when the liquid level or foam level reaches the glass rim 131, or reaches a predetermined threshold distance from the glass rim 131”; Edwards, [0103] “The combination of the ultrasound sensor 144 and infrared sensor 142 tends to improve the detection of the liquid and/or foam levels. The combination of the ultrasound sensor 144 and infrared sensor 142 may be used to prevent overflow conditions when excessive foam is produced”; Edwards, [0134] “By tracking the position of the container rim 131, the measured foam height 214 and/or liquid upper surface 204, the beer dispenser 100 may prevent spillage of beer and minimize waste”).
Regarding Claim 63, Milne in view of Edwards teaches the system of claim 46, wherein determining the anomaly comprises detecting the drip anomaly in the image by:
providing the image to a trained neural network model (Milne, [0047] “module 116 is used only to train and validate the AVI neural network(s), and the trained neural network(s) is/are then transported to another computer system for qualification and inspection during commercial production (e.g., using another module similar to module 116)”; Milne, [0148] “the generation of image library 140 may be simplified relative to image classification, and/or the trained neural network(s) may be more accurate than image classification neural networks, particularly for small defects”); and
using the trained neural network model to determine one of a plurality of classifications, wherein at least one of the plurality of classifications is indicative of the drip anomaly in the image (Milne, [0046] “the AVI neural network(s) trained and/or run by module 116 may classify entire images (e.g., defect vs. no defect, or presence or absence of a particular type of defect, etc.), detect objects in image”; Milne, [0074] “of defects associated with the interaction between syringes and the syringe contents may include a leak of liquid through the plunger, liquid in the ribs of the plunger, a leak of liquid from the needle shield, and so on”; Milne, [0077] “Non-limiting examples of defects associated with container contents (e.g., contents of syringes, cartridges, vials, or other container types) may include: … a high or low fill level”; Milne, [0078] “AVI neural networks perform image classification to detect defects in different categories (e.g., by classifying defects in a given category as “present” or “absent”)”).
Regarding Claim 64, Milne in view of Edwards teaches the system of claim 63, wherein the one or more processors are further configured to:
obtain a plurality of training videos (Milne, [0162] “a plurality of container images is obtained. Block 2002 may include generating the container images (e.g., by visual inspection system 102 and VIS control module 120), and/or may include receiving the container images from another source (e.g., by VIS control module 120 or image pre-processing module 132, from a server maintaining image library 140)”), wherein each training video of the plurality of training videos:
captures an associated fluid filling process (Milne, [0090] “FIG. 9A depicts example features that may vary in position between containers or container lots, for the specific case of a syringe 900. As seen in FIG. 9, features of syringe 900 that may vary in position include a meniscus position 902 (e.g., based on fill levels)”) (Edwards, [0042] “dispensing beer from the nozzle into the container; generating at least one image of the dispensed beer using at least one imaging device”); and
comprises a plurality of training images each labeled with one of the plurality of classifications (Milne, [0065] “Computer system 104 may store and execute custom, user-facing software that facilitates the capture of training images (for image library 140), for the manual labeling of those images (to support supervised learning) prior to training the AVI neural network(s)… The GUI may include interactive controls that enable the user to specify the number of frames/images that visual inspection system 102 is to capture, the rotation angle between frames/images (if different perspectives are desired), and so on. The GUI (or another GUI generated by another program) may also display each captured frame/image to the user, and include user interactive controls for manipulating the image (e.g., zoom, pan, etc.) and for manually labeling the image (e.g., “defect observed” or “no defect” for image classification, or drawing boundaries within, or pixel-wise labeling, portions of images for object detection)”; see [0080] and [0148]); and
training the neural network model using the plurality of training videos (Milne, [0040] “system 100 is described herein as training and validating one or more AVI neural networks using container images from visual inspection system 102”; [0050] “AVI neural network module 116 then uses at least some of the container images in image library 140 to train the AVI neural network(s), and uses other container images in library 140 (or in another library not shown in FIG. 1) to validate the trained AVI neural network(s)”).
Regarding Claim 65, Milne in view of Edwards teaches the system of claim 48, wherein identifying the filling component or the fluid comprises:
identifying edge pixels in the image (Milne, [0086] “image portions 812, 814, 816 may be the entire inputs (training images) for the respective ones of the AVI neural networks, or the image pre-processing module 132 may pad the image portions 812, 814, 816 (e.g., with constant value pixels)”);
determining one or more bounded areas based on the edge pixels (Milne, [0046] “techniques that perform segmentation of the container image or image portion (e.g., pixel-by-pixel classification), or techniques that identify objects and place bounding boxes (or other boundary shapes) around those objects”); and
for each bounded area of the one or more bounded areas:
determining whether a size of the bounded area exceeds a respective threshold area for filling component or fluid (Milne, [0182] “the position of the particular area is compared to the position of a user-identified area to determine whether the trained AVI neural network correctly identified the object in the container image… includes determining whether a center of the particular, model-generated area falls within the user-identified area (or vice versa), or determining whether at least a threshold percentage of the particular, model-generated area overlaps the user-identified area”); and
if the size of the bounded area exceeds the respective threshold, selecting the bounded area as the filling component or the fluid (Milne, [0182] “the position of the particular area is compared to the position of a user-identified area to determine whether the trained AVI neural network correctly identified the object in the container image… includes determining whether a center of the particular, model-generated area falls within the user-identified area (or vice versa), or determining whether at least a threshold percentage of the particular, model-generated area overlaps the user-identified area”).
Regarding Claim 66, Milne in view of Edwards teaches the system of claim 65, wherein:
identifying the edge pixels in the image comprises subtracting the image from a reference image preceding the image in the video to generate a subtracted image (Milne, [0124] “FIG. 14B, module 132 may detect the edges 1402a and 1402b using any suitable edge detection technique, and may output data indicative of the positions of edges 1402a and 1402b (e.g., relative to a center line 1410, corner, or other portion of the entire image). In FIG. 14B, reference lines 1412a and 1412b may represent expected edge locations (e.g., as stored in memory unit 114), which should also correspond to the corrected positions of edges 1402a and 1402b, respectively”).
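For illustration only (this sketch is the Examiner's own, not part of the Milne or Edwards disclosures; the function name and threshold value are hypothetical), the claimed frame-subtraction step can be expressed as follows:

```python
import numpy as np

def difference_edge_mask(frame, reference, threshold=25):
    """Subtract a reference image from the current image and threshold the
    absolute per-pixel difference; pixels that changed between the two
    frames are flagged as candidate edge pixels."""
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return diff > threshold  # boolean mask of candidate edge pixels
```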
Regarding Claim 67, Milne in view of Edwards teaches the system of claim 65, wherein the one or more processors are further configured to generate a bounding box of the filling component or the fluid based on the bounded area, wherein the bounding box is a minimum rectangle that encompasses the bounded area (Milne, [0147] “object detection, the building of image library 140 may be more onerous than for image classification in some respects, as users typically must manually draw bounding boxes (or boundaries of other defined or arbitrary two-dimensional shapes) around each relevant object, or pixel-wise label (e.g., “paint”) each relevant object”; Milne, [0149] “an AVI neural network is shown what area to focus on (e.g., via a bounding box or other boundary drawn manually using a labeling tool), and the neural network returns/generates a bounding box (or boundary of some other shape), or a pixel-wise classification of the image (if the neural network performs segmentation), to identify similar objects”; Milne, [0154] “As seen in FIG. 19A, the neural network outputs a number of bounding boxes 1910 of different sizes, corresponding to detected objects of different sizes”).
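For illustration only (this sketch is the Examiner's own, not part of the cited references; the function name is hypothetical), the claimed minimum rectangle encompassing a bounded area corresponds to the tightest axis-aligned box around the region's pixels:

```python
import numpy as np

def min_bounding_box(mask):
    """Return (row_min, col_min, row_max, col_max): the smallest
    axis-aligned rectangle enclosing all True pixels of a region mask."""
    rows, cols = np.nonzero(mask)
    return int(rows.min()), int(cols.min()), int(rows.max()), int(cols.max())
```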
Regarding Claim 68, Milne in view of Edwards teaches the system of claim 48, wherein identifying the filling component or the fluid comprises:
generating a difference image between the image and a reference image in the video (Edwards, [0114] “the processor may determine a difference between a measured beverage characteristic and a desired beverage characteristic. The MCU 174 may then adjust the distance between the nozzle and the liquid upper surface as the beverage is dispensed to reduce the difference between the measured beverage characteristic and the desired beverage characteristic. For example, beverage characteristics may include, but are not limited, to the height of the beverage and/or the height of the foam”);
determining one or more edges in the image based on the difference image (Edwards, [0152] “FIG. 14A shows a modified image 186 of the distortion corrected image 185. The distortion corrected image 185 may be modified using an edge detection algorithm, as shown in the modified image 186”);
using the one or more edges to determine one or more bounded areas (Edwards, [0152] “Sobel operation and masking in the y-direction were used to identify the liquid upper surface 204. A Sobel filter may be applied in the vertical direction for edge detection of the transition between beer 200 and foam 210. The image 185 has been filtered based on the magnitude of the gradient”); and
selecting a bounded area that has a largest area among the one or more bounded areas as the filling component or the fluid (Milne, [0128] “the edge detection and thresholding techniques may be used for fabrication, adjustment and/or qualification of a production AVI system to ensure that the fixturing of containers relative to the camera(s) is correct and consistent”; Milne, [0147] “‘object detection’ as used herein may broadly refer to any techniques that identify the particular location of an object (e.g., particle) within an image, and/or that identify the particular location of a feature of a larger object (e.g., a crack or chip on a syringe or cartridge barrel, etc.), and can include, for example, techniques that perform segmentation of the container image or image portion (e.g., pixel-by-pixel classification), or techniques that identify objects and place bounding boxes (or other boundary shapes) around those objects”).
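For illustration only (this sketch is the Examiner's own and does not reproduce code from the cited references; function names are hypothetical), the vertical Sobel operation relied upon in Edwards [0152] and the largest-area selection of the instant claim can be sketched as:

```python
import numpy as np

# 3x3 Sobel kernel for the y-direction: responds to horizontal
# transitions such as a liquid/foam boundary.
SOBEL_Y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)

def sobel_vertical(img):
    """Return the absolute vertical-Sobel response of a 2-D grayscale image
    (valid convolution, so output is 2 rows/cols smaller than the input)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = abs(np.sum(img[i:i + 3, j:j + 3] * SOBEL_Y))
    return out

def largest_area(masks):
    """Select the candidate bounded area (boolean mask) with the most pixels."""
    return max(masks, key=lambda m: int(m.sum()))
```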
Regarding Claim 69, Milne in view of Edwards teaches the system of claim 46, wherein the fluid is a pharmaceutical fluid, and the container is a transparent drug product container (Milne, [0052] “Container 214 may hold a liquid or lyophilized pharmaceutical product”; Milne, [0160] “FIGS. 20 through 23, “container” may refer to a syringe, cartridge, vial, or any other type of container, and may refer to a container holding contents (e.g., a pharmaceutical fluid or lyophilized product)”).
Allowable Subject Matter
Claims 52, 53, 60 and 61 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and overcoming any claim objections.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20210375115 A1 METHOD AND SYSTEM FOR PERFORMING REAL-TIME MONITORING OF A CONTAINER CONTAINING A NON-GASEOUS FLUID SUBSTANCE
US 20200126210 A1 Defect Detection In Lyophilized Drug Products With Convolutional Neural Networks
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAISALI RAO KOPPOLU whose telephone number is (571)270-0273. The examiner can normally be reached Monday - Friday 8:30 - 5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
VAISALI RAO KOPPOLU
Examiner
Art Unit 2664
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664