DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 9, and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Piana et al. (US 20170343483 A1) in view of Duan et al. (IDS: An Adaptive Threshold Segmentation Algorithm for Impurity in Liquid Automatic Detecting System).
Regarding claim 1, Piana et al. disclose a method for impurity inspection in a product (Inspection machines are used e.g. in the beverage industry for examining empties, such as glass or plastic bottles, for damage, contamination or residues of liquid, [0002]) comprising: determining a product object to be inspected and placing the product object in an inspection state (a feed conveying device configured to feed containers to the inspection device in succession, [0006], [0067]); performing video capture on the product object in the inspection state from a plurality of different preset capture angles to obtain captured videos of the product object at each of the preset capture angles (For example, the whole container height may be illuminated with an LED area light, with one or a plurality of CCD cameras taking one or a plurality of pictures of the container sidewall from different angles of view. For example, two pictures of the sidewall can be taken from different angles of view via a camera and an optical system comprising four mirrors, the angles of view deviating from one another e.g. by 90° in the circumferential direction, [0047]); performing image processing on the captured video to obtain a captured image corresponding to the preset capture angle (The data of this inspection station can be transmitted to the processing unit 180, which is here schematically shown, for further processing, [0069]); and performing impurity inspection on the product object according to the captured images corresponding to each of the preset capture angles to generate an inspection result corresponding to the product object (The sensor data or optical data recorded by the inspection stations may be transmitted to an evaluation unit, e.g. a computing unit, of the inspection device, which will evaluate the data automatically so as to detect damage or contamination, [0049], The processing unit 180 evaluates the data automatically, so as to detect e.g. 
damage of the container bottom, [0069]).
Piana et al. do not disclose using video.
Duan et al. teach performing video capture on the product object in the inspection state to obtain captured videos of the product object (In the detection section, the bottle stops rotation, and the digital camera is triggered and a series of images of moving medicinal liquid is get and delivered to industrial computer for image processing, part II, video sequences, part III); performing image processing on the captured video (delivered to industrial computer for image processing, part II, Because of the less moving information of two contiguous frames difference, in this paper, the moving information is achieved from the intersectant-frame difference with 4 continuous frames, and the fourth-order moment of the summation with a threshold proportional to the estimated background activity is performed, part III, The thresholded images contain targets and high-level noise. Based on the connectivity and geometry characteristics of impurity and noise, and considering that the number of pixels of the connected domain formed by the moving impurity is about 20 to 600 pixels, and the length-width ratio is about 0.7 ~ 1.8, it can be regard as the testing condition to discard the noise points and give out the detection result. In this paper, a post-processing algorithm is introduced for target tracking, part IVB); and performing impurity inspection on the product object according to the captured images to generate an inspection result corresponding to the product object (bottle which has impurity will be marked with serial number, part II, Repeat step3 and give out the detection result, part IVB).
Piana et al. and Duan et al. are in the same art of container inspection (Piana et al., abstract; Duan et al., abstract). The combination of Duan et al. with Piana et al. enables using video. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to combine the video capture of Duan et al. with the invention of Piana et al., as this was known at the time of filing, the combination would have predictable results, and as Duan et al. indicate: "In the first stage, a new method of extracting information of moving targets in visible image sequences is proposed, which is based on the fourth-order moment of the summation of inter-frame difference with 5 continuous frames. The experimental and factual testing results show that the proposed algorithm can meet the demands of impurity in medicinal liquid in real-time detecting, and that it is a practicable and effective image segmentation method" (abstract), showing how particulate movement information can be helpful in detecting impurities, which will improve the inspection capabilities of Piana et al.
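As an illustrative aside, the Duan et al. segmentation pipeline quoted above (inter-frame differencing over consecutive frames, a threshold built from the fourth-order moment of the summed difference image, and rejection of connected components outside the cited 20 to 600 pixel and 0.7 to 1.8 length-width ratio ranges) can be sketched as follows. This is a minimal reconstruction for clarity only, not the reference's actual code: the function names, the use of adjacent-frame differences in place of the paper's intersectant-frame difference, and the exact form of the moment-based threshold are assumptions.

```python
# Illustrative sketch (hypothetical names) of Duan et al.-style adaptive-
# threshold segmentation: frame differencing, a fourth-order-moment threshold,
# then size/aspect-ratio filtering of connected components.
import numpy as np

def _components(mask):
    """4-connected components of a boolean mask (plain BFS, no SciPy)."""
    h, w = mask.shape
    visited = np.zeros_like(mask)
    comps = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                stack, pix = [(i, j)], []
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    pix.append((y, x))
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                comps.append(pix)
    return comps

def detect_moving_impurities(frames, k=4.0, min_px=20, max_px=600,
                             min_ratio=0.7, max_ratio=1.8):
    """frames: a short sequence of greyscale 2-D float arrays."""
    # Motion image: summed absolute differences of consecutive frames
    # (a simplification of the paper's intersectant-frame difference).
    motion = sum(np.abs(frames[i + 1] - frames[i]) for i in range(len(frames) - 1))
    # Threshold from the fourth-order moment of the motion image; the exact
    # proportionality is not given in the paper, so k * m4**(1/4) is assumed.
    m4 = np.mean((motion - motion.mean()) ** 4)
    binary = motion > k * m4 ** 0.25
    # Keep components matching the impurity statistics cited in the reference:
    # about 20-600 pixels and a length-width ratio of about 0.7-1.8.
    accepted = np.zeros_like(binary)
    for pix in _components(binary):
        ys = [p[0] for p in pix]
        xs = [p[1] for p in pix]
        ratio = (max(ys) - min(ys) + 1) / (max(xs) - min(xs) + 1)
        if min_px <= len(pix) <= max_px and min_ratio <= ratio <= max_ratio:
            for y, x in pix:
                accepted[y, x] = True
    return accepted
```

With four synthetic frames of a bright blob translating across a dark background, the routine keeps only those moving regions whose size and aspect ratio match the impurity statistics Duan et al. report; a static scene produces no detections.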
Regarding claim 2, Piana et al. and Duan et al. disclose the method according to claim 1. Piana et al. and Duan et al. further indicate the placing the product object in the inspection state comprises: placing the product object in an inspection state of a target tilt angle (Piana et al., “Due to the comblike interengagement of the clamps, the container is here reliably prevented from tilting while it is being conveyed along the throughput station”, [0088]) [tilt prevented is interpreted as 0 degrees tilt/flat] as well as vibration and/or rotation (Piana et al., By continuing to move the conveying unit 443a at a higher speed than the conveying unit 443b, this rotation of the container 433 will continue until a rotation of about 90° has taken place, [0085], Fig. 4:
[greyscale image: media_image1.png]
; Duan et al., Firstly, when the bottle of medicinal liquid which needs detected is transmitted into the rotating acceleration section, the medicinal liquid will be rotated, and impurity that suspended or settled at the bottom of the bottle come into moving, at the same time, medicinal liquid is distinguished from the spots which belong to the bottle, part II, See also Fig. 1:
[greyscale image: media_image2.png]
).
Regarding claim 9, Piana et al. disclose a system for impurity inspection in a product (Inspection machines are used e.g. in the beverage industry for examining empties, such as glass or plastic bottles, for damage, contamination or residues of liquid, [0002]), comprising: an infeed apparatus (individual containers to be taken over from an infeed flow, [0022], containers contacting one another in the infeed of the inspection device, [0044]), a conveyor apparatus (conveyor track for conveying the container or the containers, [0043]), a number of collection apparatuses corresponding to different preset capture angles (For example, the whole container height may be illuminated with an LED area light, with one or a plurality of CCD cameras taking one or a plurality of pictures of the container sidewall from different angles of view. For example, two pictures of the sidewall can be taken from different angles of view via a camera and an optical system comprising four mirrors, the angles of view deviating from one another e.g. by 90° in the circumferential direction, [0047]), and an apparatus for impurity inspection (The sensor data or optical data recorded by the inspection stations may be transmitted to an evaluation unit, e.g. 
a computing unit, of the inspection device, which will evaluate the data automatically so as to detect damage or contamination, [0049]); wherein, the infeed apparatus is configured to convey a product object to be inspected and place the product object in an inspection state (individual containers to be taken over from an infeed flow, [0022], containers contacting one another in the infeed of the inspection device, [0044], infeed side at the feed conveying device, [0072]); and the collection apparatuses are configured to perform video capture on the product object in the inspection state during a process of conveying the product object by the conveyor apparatus, to obtain captured videos of the product object at each of the preset capture angles (For example, the whole container height may be illuminated with an LED area light, with one or a plurality of CCD cameras taking one or a plurality of pictures of the container sidewall from different angles of view. For example, two pictures of the sidewall can be taken from different angles of view via a camera and an optical system comprising four mirrors, the angles of view deviating from one another e.g. by 90° in the circumferential direction, [0047]); the apparatus for impurity inspection is configured to perform image processing on the captured video to obtain a captured image corresponding to the preset capture angle (The data of this inspection station can be transmitted to the processing unit 180, which is here schematically shown, for further processing, [0069]), and perform impurity inspection on the product object according to the captured images corresponding to each of the preset capture angles to generate an inspection result corresponding to the product object (The sensor data or optical data recorded by the inspection stations may be transmitted to an evaluation unit, e.g. 
a computing unit, of the inspection device, which will evaluate the data automatically so as to detect damage or contamination, [0049], The processing unit 180 evaluates the data automatically, so as to detect e.g. damage of the container bottom, [0069]).
Piana et al. do not disclose using video.
Duan et al. teach the collection apparatuses are configured to perform video capture on the product object in the inspection state during a process of conveying the product object by the conveyor apparatus, to obtain captured videos of the product object (
[greyscale image: media_image2.png]
, in the detection section, the bottle stops rotation, and the digital camera is triggered and a series of images of moving medicinal liquid is get and delivered to industrial computer for image processing, part II, video sequences, part III); the apparatus for impurity inspection is configured to perform image processing on the captured video to obtain a captured image corresponding to the preset capture angle (delivered to industrial computer for image processing, part II, Because of the less moving information of two contiguous frames difference, in this paper, the moving information is achieved from the intersectant-frame difference with 4 continuous frames, and the fourth-order moment of the summation with a threshold proportional to the estimated background activity is performed, part III, The thresholded images contain targets and high-level noise. Based on the connectivity and geometry characteristics of impurity and noise, and considering that the number of pixels of the connected domain formed by the moving impurity is about 20 to 600 pixels, and the length-width ratio is about 0.7 ~ 1.8, it can be regard as the testing condition to discard the noise points and give out the detection result. In this paper, a post-processing algorithm is introduced for target tracking, part IVB); and perform impurity inspection on the product object according to the captured images to generate an inspection result corresponding to the product object (bottle which has impurity will be marked with serial number, part II, Repeat step3 and give out the detection result, part IVB).
Piana et al. and Duan et al. are in the same art of container inspection (Piana et al., abstract; Duan et al., abstract). The combination of Duan et al. with Piana et al. enables using video. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to combine the video capture of Duan et al. with the invention of Piana et al., as this was known at the time of filing, the combination would have predictable results, and as Duan et al. indicate: "In the first stage, a new method of extracting information of moving targets in visible image sequences is proposed, which is based on the fourth-order moment of the summation of inter-frame difference with 5 continuous frames. The experimental and factual testing results show that the proposed algorithm can meet the demands of impurity in medicinal liquid in real-time detecting, and that it is a practicable and effective image segmentation method" (abstract), showing how particulate movement information can be helpful in detecting impurities, which will improve the inspection capabilities of Piana et al.
Regarding claim 11, Piana et al. and Duan et al. disclose the system according to claim 9. Piana et al. and Duan et al. further indicate (Piana et al., the whole container height may be illuminated with an LED area light, with one or a plurality of CCD cameras taking one or a plurality of pictures of the container sidewall from different angles of view. For example, two pictures of the sidewall can be taken from different angles of view via a camera and an optical system comprising four mirrors, the angles of view deviating from one another e.g. by 90° in the circumferential direction, [0047]; Duan et al.,
[greyscale image: media_image2.png]
proposed algorithm can meet the demands of impurity in medicinal liquid in real-time detecting, abstract; intersectant-frame difference with 4 continuous frames, part III).
Claim(s) 3, 4, and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Piana et al. (US 20170343483 A1) and Duan et al. (IDS: An Adaptive Threshold Segmentation Algorithm for Impurity in Liquid Automatic Detecting System) as applied to claim 1 above, further in view of Milne et al. (US 20220230394 A1).
Regarding claim 3, Piana et al. and Duan et al. disclose the method according to claim 1. Piana et al. and Duan et al. further indicate extracting a target number of image frames from the captured video corresponding to the preset capture angle (Duan et al., 5 continuous frames, abstract, Given 4 continuous frames, part III) but do not explicitly disclose performing image alignment on the image frames corresponding to each of the preset capture angles to obtain the captured images corresponding to each of the preset capture angles.
Milne et al. teach extracting a target number of image frames from the captured video corresponding to the preset capture angle (reconstruct the 3D particle (or other object) in a single 3D frame based on the 3D frames immediately before and after the occlusion event, [0071], one or more of the discrete objects are tracked across two or more of the 3D images (frames), [0079]); and performing image alignment on the image frames corresponding to each of the preset capture angles to obtain the captured images corresponding to each of the preset capture angles (Thereafter, a digital resampling module 634 of application 620 pre-processes the 2D images captured by visual inspection system 602, using the calibration data generated by calibration module 632, to spatially normalize/align the 2D images from the different cameras (e.g., with pixel-level precision), cameras have different angles of inclination relative to the vessel, [0054], generating calibration data (e.g., correction factors/matrices) that will be used to align the 2D images captured by visual inspection system 602 during the inspection process, [0055], In some embodiments, calibration module 632 (or another module or application) also facilitates a validation stage at which it is determined whether the calibration data generated for the various cameras properly aligns the images. In one such embodiment where images of three cameras are calibrated (e.g., cameras 302a through 302c), a monochrome calibration image from each camera is input to each color channel (i.e., red, green and blue channels) of a composite RGB image, [0061], By using 3D look-up tables, for example, the system can be expanded to any arbitrary number of cameras, at any orientation relative to the vessel/sample (e.g. at points around a sphere that is centered on the vessel). 
Because the look-up-tables are pre-calculated, adding cameras at different angles, potentially with different lenses, does not necessarily add a substantial computational burden when inspecting any given sample lot, [0073], Further, a full 3D ray optics approach, along with knowledge of the true orientation of the vessel shape relative to the vertical axis, can allow for full correction of any spatial alignment errors in the images of different cameras (even, in some embodiments, if the software-based calibration techniques described above are not implemented), [0074], At block 902, at least three 2D images of a sample in a vessel (e.g., vessel 400 or 500) are captured by at least three cameras located at different positions around the vessel (e.g., cameras 302a through 302d, or cameras 302b through 302d, of FIG. 3). The optical axis of a first one of the cameras is inclined or declined at a first angle (greater than or equal to 0 degrees) relative to the horizontal plane, and the optical axis of a second one of the cameras is inclined or declined at a second angle relative to the horizontal plane, with the second angle being at least five degrees greater than the first angle., [0077]).
Piana et al., Duan et al., and Milne et al. are in the same art of container inspection (Piana et al., abstract; Duan et al., abstract; Milne et al., [0031]). The combination of Milne et al. with Piana et al. enables performing image alignment. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to combine the alignment of Milne et al. with the invention of Piana et al., as this was known at the time of filing, the combination would have predictable results, and as Milne et al. indicate "In some embodiments, software techniques are used to bolster the accuracy of particle tracking (e.g., in addition to the 3D imaging techniques discussed above)" ([0071]) and "The 3D imaging techniques described above may provide various advantages, in addition to enhanced accuracy for particle detection, sizing, shape determination, classification, and/or tracking. By using 3D look-up tables, for example, the system can be expanded to any arbitrary number of cameras, at any orientation relative to the vessel/sample (e.g. at points around a sphere that is centered on the vessel). Because the look-up-tables are pre-calculated, adding cameras at different angles, potentially with different lenses, does not necessarily add a substantial computational burden when inspecting any given sample lot. In addition to providing more informative camera perspectives, this approach provides versatility, e.g., by allowing cameras to be moved in order to accommodate other system design considerations, such as conveyance for the vessels/samples under scrutiny (e.g., the path of a robotic arm). This may be important when integrating the technology into larger, automated commercial manufacturing platforms, for example" ([0073]) demonstrating an improvement to accuracy while not compromising modular adaptability and computational efficiency.
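As an illustrative aside, the kind of pixel-level spatial alignment of 2D images from different cameras that Milne et al. describe can be approximated, for pure translation, by classical phase correlation. This sketch is not Milne et al.'s calibration procedure (which uses calibration images and stored correction factors and also handles rotation); the helper names are hypothetical, and it only illustrates how a per-camera offset could be estimated and undone:

```python
# Illustrative phase-correlation alignment (hypothetical helper names):
# estimates the integer translation between two images and undoes it.
import numpy as np

def estimate_shift(ref, img):
    """Return (dy, dx) such that img is approximately np.roll(ref, (dy, dx))."""
    cross = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12               # keep phase information only
    corr = np.fft.ifft2(cross).real              # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Map wrapped indices to signed shifts.
    return (int(dy) - h if dy > h // 2 else int(dy),
            int(dx) - w if dx > w // 2 else int(dx))

def align_to_reference(ref, img):
    """Resample img onto ref's pixel grid by undoing the estimated shift."""
    dy, dx = estimate_shift(ref, img)
    return np.roll(img, (-dy, -dx), axis=(0, 1))
```

For two images related by a circular shift, the normalized cross-power spectrum is a pure phase ramp, so its inverse transform peaks exactly at the shift; that peak location is then used to re-register the second camera's image.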
Regarding claim 4, Piana et al., Duan et al., and Milne et al. disclose the method according to claim 3. Milne et al. further indicate the performing the image alignment on the image frames corresponding to each of the preset capture angles to obtain the captured images corresponding to each of the preset capture angles comprises: extracting product feature information corresponding to the product object in each of the image frames; and performing image correction on the image frames according to the product feature information corresponding to each of the image frames to obtain the captured images corresponding to each of the preset capture angles (In some embodiments, VIS control module 630 causes visual inspection system 602 to perform certain calibration-related procedures, such as capturing 2D calibration images, and a calibration module 632 of application 620 processes the calibration images to generate calibration data (e.g., correction factors, matrices, etc.). Thereafter, a digital resampling module 634 of application 620 pre-processes the 2D images captured by visual inspection system 602, using the calibration data generated by calibration module 632, to spatially normalize/align the 2D images from the different cameras (e.g., with pixel-level precision), [0054], In this calibration procedure, VIS control module 630 causes the cameras of visual inspection system 602 (e.g., cameras 302a through 302d) to sequentially capture respective 2D images of the vessel under backlit conditions. The vessel (e.g., held or supported by fixture 306) may be empty or filled with a sample (e.g., a liquid drug product). 
In some embodiments, for example, calibration procedures are repeated for each new vessel/sample, in which case a sample will be present in the vessel during each iteration thereof, [0055], “Calibration module 632 processes each of the 2D calibration images from the various cameras to determine at least a horizontal offset of the image, a vertical offset of the image, and a rotation of the image… In other embodiments, calibration module 632 compares the locations of the detected edges to the locations of edges detected in a different one of the calibration images, which is used as a reference image. For example, calibration module 632 may compare the locations of edges detected in the calibration images obtained by cameras 302b through 302d with the locations of the edges detected in the calibration image obtained by camera 302a. Calibration module 632 may store the calibration data for each camera in memory unit 612 or another suitable location”, [0057], If the images are properly aligned, any non-transparent objects within the composite RGB image (including portions of the vessel, possibly) should appear white. In one embodiment, calibration module 632 causes the composite RGB image to be presented via a user interface shown on display 616, such that a human user can quickly confirm whether proper alignment has been achieved. Alternatively, calibration module 632 may process the composite RGB image to detect any red, blue, and/or green areas, and determine whether proper alignment has been achieved based on those areas (if any). 
For example, calibration module 632 may determine that the calibration procedure failed when detecting more than a threshold number of single- or two-colored pixels (possibly excluding areas where particles may reside in a sample), [0061], synchronize the 2D images captured by the multiple cameras, [0068], For each trajectory, application 620 may generate time-stamped measurements of particle size, particle shape, and/or other metrics, [0071], a full 3D ray optics approach, along with knowledge of the true orientation of the vessel shape relative to the vertical axis, can allow for full correction of any spatial alignment errors in the images of different cameras, [0074]) [vessel edges and particle trajectory are interpreted as "product feature information"].
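As an illustrative aside, Milne et al.'s validation stage at [0061] (feeding one monochrome calibration image from each of three cameras into the red, green, and blue channels of a composite image, so that misalignment shows up as colored rather than white pixels) can be sketched as below. The function name, tolerance value, and pass/fail count are assumptions, not values from the reference:

```python
# Illustrative sketch of the composite-RGB alignment validation described
# at Milne et al. [0061]. Name and thresholds are hypothetical.
import numpy as np

def composite_alignment_check(img_r, img_g, img_b, tol=10, max_bad=0):
    """Return (aligned, n_coloured_pixels) for three monochrome images."""
    rgb = np.stack([img_r, img_g, img_b], axis=-1).astype(float)
    # A pixel is "coloured" when its channels disagree by more than tol,
    # i.e. the three cameras did not record the same content there.
    spread = rgb.max(axis=-1) - rgb.min(axis=-1)
    bad = int((spread > tol).sum())
    return bad <= max_bad, bad
```

Feeding the same image into all three channels passes the check; shifting one channel produces colored fringes at object edges and the check fails, mirroring the manual or automatic inspection of the composite RGB image that the reference describes.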
Regarding claim 14, Piana et al. and Duan et al. disclose the method according to claim 1. Piana et al. and Duan et al. partly indicate a non-transitory computer readable storage medium, storing instructions which, when executed by one or more processors, enable the processor to implement the method according to claim 1 (Piana et al., the conveyor arrangement may comprise an open-loop and/or closed-loop control unit, in particular a process computer, for controlling the at least one conveying unit, [0043]; Duan et al., industrial computer for image processing, part II) [a computer indicates a computer readable medium]; however, another reference is added to make this explicit.
Milne et al. teach a non-transitory computer readable storage medium, storing instructions which, when executed by one or more processors, enable the processor to implement the method according to claim 1 (Processing unit 610 includes one or more processors, each of which may be a programmable microprocessor that executes software instructions stored in memory 612 to execute some or all of the functions of computing system 604 as described herein. Processing unit 610 may include one or more graphics processing units (GPUs) and/or one or more central processing units (CPUs), for example. Alternatively, or in addition, some of the processors in processing unit 610 may be other types of processors (e.g., application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), etc.), and some of the functionality of computing system 604 as described herein may instead be implemented in hardware. Memory unit 612 may include one or more volatile and/or non-volatile memories. Any suitable memory type or types may be included in memory unit 612, such as read-only memory (ROM), random access memory (RAM), flash memory, a solid-state drive (SSD), a hard disk drive (HDD), and so on. Collectively, memory unit 612 may store one or more software applications, the data received/used by those applications, and the data output/generated by those applications, [0051]).
Piana et al., Duan et al., and Milne et al. are in the same art of container inspection (Piana et al., abstract; Duan et al., abstract; Milne et al., [0031]). The combination of Milne et al. with Piana et al. enables using a processor. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to combine the processor of Milne et al. with the invention of Piana et al., as this was known at the time of filing, the combination would have predictable results, and as Milne et al. indicate "In some embodiments, software techniques are used to bolster the accuracy of particle tracking (e.g., in addition to the 3D imaging techniques discussed above)" ([0071]) and "The 3D imaging techniques described above may provide various advantages, in addition to enhanced accuracy for particle detection, sizing, shape determination, classification, and/or tracking. By using 3D look-up tables, for example, the system can be expanded to any arbitrary number of cameras, at any orientation relative to the vessel/sample (e.g. at points around a sphere that is centered on the vessel). Because the look-up-tables are pre-calculated, adding cameras at different angles, potentially with different lenses, does not necessarily add a substantial computational burden when inspecting any given sample lot. In addition to providing more informative camera perspectives, this approach provides versatility, e.g., by allowing cameras to be moved in order to accommodate other system design considerations, such as conveyance for the vessels/samples under scrutiny (e.g., the path of a robotic arm). This may be important when integrating the technology into larger, automated commercial manufacturing platforms, for example" ([0073]) demonstrating an improvement to accuracy while not compromising modular adaptability and computational efficiency.
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Piana et al. (US 20170343483 A1) and Duan et al. (IDS: An Adaptive Threshold Segmentation Algorithm for Impurity in Liquid Automatic Detecting System) as applied to claim 9 above, further in view of Kobayashi (Machine Translation JP 2006111416 A), further in view of Fujio et al. (US 20200354164 A1), and further in view of Dragotta (US 20010033372 A1).
Regarding claim 10, Piana et al. and Duan et al. disclose the system according to claim 9. Piana et al. and Duan et al. further indicate the infeed apparatus comprises at least a front conveyor belt (Piana et al., conveyor belt, [0013], [0067]; Duan et al., see conveyor Fig. 1), a corner clamp belt (Piana et al., The clamps of the holding devices may, for example, be configured such that they act laterally on the containers. The ends of the clamps may have provided thereon a respective support roller so that the conveyed container can deliberately be rotated by one or a plurality of friction belts acting thereon from the side, said friction belts being arranged in the area of the throughput station, [0023], end-mounted rollers provided on the clamps and holding the conveyed containers in a form-fit manner, [0090]), and a lamp inspection clamp belt (Piana et al., conveying route of the conveyor arrangement, it is especially possible to provide an inspection unit for inspecting the container bottom, in the case of which a camera records an image of the container bottom, which is illuminated by an LED flash lamp, [0009], first and second conveyor tracks, bottom inspection station 150 recording, e.g. by means of a CCD camera, an optical picture of the bottom of the container 132 illuminated by an LED flash lamp, [0069]; Duan et al., see lighting in Fig. 1); the corner clamp belt is configured to place the product object to be inspected conveyed by the front conveyor belt at a tilt (Piana et al., In the course of this process, the resilient elements 475 may be compressed at least partially so that the Y-legs of the clamps 412 abutting in a direction laterally to the conveying direction will be pressed with sufficient force against the container wall of the container 431.
The dimensions of the Y-legs in the longitudinal direction of the containers can here be chosen such that the static friction prevailing between the clamps 412 and the outer surface of the container will be sufficiently high for reliably holding the conveyed container 431, [0084]); the lamp inspection clamp belt is configured to perform vibration processing on the product object (Duan et al., Firstly, when the bottle of medicinal liquid which needs detected is transmitted into the rotating acceleration section, the medicinal liquid will be rotated, and impurity that suspended or settled at the bottom of the bottle come into moving, at the same time, medicinal liquid is distinguished from the spots which belong to the bottle, part II); the front conveyor belt is configured to convey the product object in a state of tilt and vibration to the conveyor apparatus (Duan et al., Firstly, when the bottle of medicinal liquid which needs detected is transmitted into the rotating acceleration section, the medicinal liquid will be rotated, and impurity that suspended or settled at the bottom of the bottle come into moving, at the same time, medicinal liquid is distinguished from the spots which belong to the bottle, part II).
Piana et al. and Duan et al. do not explicitly disclose the corner clamp belt is configured to place the product object to be inspected conveyed by the front conveyor belt at a tilt; or the front conveyor belt is configured to convey the product object in a state of 45-degree tilt and vibration to the conveyor apparatus.
Kobayashi teaches the corner clamp is configured to place the product object to be inspected conveyed by the front conveyor belt at a tilt; the front conveyor belt is configured to convey the product object in a state of tilt and vibration to the conveyor apparatus (Therefore, the present invention was created to eliminate the above-mentioned difficulties, and the conveying system described in claim 1 of the present invention employs means that includes at least a primary position-changing conveying device A that can change the position of the conveyed object in a direction approximately perpendicular to the conveying direction by clamping and holding it with conveying belts 5 and 6 that are in a twisted state, and a secondary position-changing conveying device B that can further change the position of the conveyed object in a direction approximately perpendicular to the conveying direction by clamping and holding it with conveying belts 5 and 6 that are in a twisted state, [0006], The present invention is a conveying system having a posture-changing conveying device that clamps the left and right sides of a beverage (e.g., a beverage filled in a PET bottle), food (e.g., food in a packaging container), medicine (e.g., medicine in a packaging bottle), the packaging container itself, the packaging bottle itself, or any other suitable object to be conveyed, and gradually changes the posture of the object being conveyed in a direction (twisting direction) that is approximately perpendicular to the conveying direction (e.g., from an upright position to an overturned position, from an overturned position to an upright position, from an overturned position to an inverted position, or from an inverted position to an overturned position, etc.) in order to convey the object, [0021]).
Piana et al. and Duan et al. and Kobayashi are in the same art of container inspection (Piana et al., abstract; Duan et al., abstract; Kobayashi, [0021]). The combination of Kobayashi with Piana et al. and Duan et al. enables using a clamp to tilt a bottle. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the clamp of Kobayashi with the invention of Piana et al. and Duan et al. as this was known at the time of filing, the combination would have predictable results, and as Kobayashi indicates “The present invention relates to a conveying system that conveys suitable items such as beverages, food, medicines, and other items contained in bottles, cans, PET bottles, etc., and that allows the position of the items to be freely changed. In particular, the present invention relates to a conveying system that has a position-changing conveying device that can convey items held by clamping the left and right sides by gradually changing the position of the items in a direction (torsion direction) that is approximately perpendicular to the conveying direction (for example, from an upright state to an overturned state, from an overturned state to an upright state, from an overturned state to an inverted state, or from an inverted state to an overturned state, etc.), allowing for effortless position control of the items to be conveyed, enabling space-saving of the device, reducing the load (tension) on the conveying belt, reducing noise, improving durability, having a simple configuration, being easy to maintain, being adaptable to installation conditions, and having a position-changing conveying device that is designed to improve conveying efficiency and productivity” ([0001]) thereby providing an adaptability and durability improvement to the combination of inventions.
Piana et al. and Duan et al. and Kobayashi do not disclose a 45 degree tilt or vibration specifically.
Fujio et al. teach the infeed apparatus comprises at least a front conveyor belt, a corner clamp belt, and a lamp inspection clamp belt; the corner clamp is configured to place the product object to be inspected conveyed by the front conveyor belt at a 45 degree tilt; the front conveyor belt is configured to convey the product object in a state of 45 degree tilt and vibration to the conveyor apparatus (When the photoelectric sensor (32, 33) is arranged higher than the support shaft (7) in side view, the distance between the photoelectric sensor and the bottom surface of the conveyed object to be detected can be further shortened. The photoelectric sensor (32, 33) can be attached such that a projection direction of its light beam is tilted on a tilting direction side toward the tilted orientation of the tilting conveyor unit (10, 11) with respect to a vertical direction. In this case, the tilt angle of the projection direction is set to an angle capable of detecting both a conveyed object transferring between the tilting conveyor units (10, 11) when the tilting conveyor units (10, 11) are in the horizontal orientation and a conveyed object transferring between the tilting conveyor units (10, 11) when the tilting conveyor units (10, 11) are in the tilted orientation. 
Specifically, for example, when the tilting conveyor unit is tilted, for example, 45 degrees in the left (or right) direction with respect to the vertical line passing through the axial center of the support shaft, if the light beam projection direction of the photoelectric sensor is configured to be tilted in the same direction by 22.5 degrees, a half of the tilt angle of the tilting conveyor unit, the reflected light beam can be received by the photoelectric sensor at the fixed position, for example, at the right end side of a light beam reflection area from the horizontal bottom surface of the conveyed object when the tilting conveyor unit is in the horizontal orientation, and the reflected light beam can be received by the photoelectric sensor at the fixed position, for example, at the left end side of the light beam reflection area from the tilted bottom surface of the conveyed object when the tilting conveyor unit is in the tilted orientation. That is, by utilizing the entire area of the light beam reflection area from the bottom surface of the conveyed object, the conveyed object passing between the tilting conveyor units can be reliably detected by the photoelectric sensor at the fixed position when the tilting conveyor unit is in any of the horizontal orientation and the tilted orientation, [0011]).
Piana et al. and Duan et al. and Fujio et al. are in the same art of imaging conveyed objects (Piana et al., abstract, [0009]; Duan et al., abstract, Fig. 1; Fujio et al., [0011]). The combination of Fujio et al. with Piana et al. and Duan et al. and Kobayashi enables using a 45 degree angle. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the angle of Fujio et al. with the invention of Piana et al. and Duan et al. and Kobayashi as this was known at the time of filing, the combination would have predictable results, and as Fujio et al. indicate “The sorting operation is performed by tilting the tilting conveyor units supporting and conveying the tray while the conveyance of the tray by each tilting conveyor unit is continued in order not to reduce the conveyance efficiency. When the conveying and sorting apparatus is used in this manner, in order to decrease the conveying pitch of the tray and increase the conveyance efficiency, the length in the conveying direction of each tilting conveyor unit is configured to be sufficiently shortened with respect to the length of the tray to support one tray by a plurality of tilting conveyor units, and the tilting conveyor units are controlled as follows” ([0003]) “Moreover, at the time of detection of the conveyed object, there is no need to compare the detection results of the surface of the conveyed object with the detection results of an object other than the conveyed object, for example, the surface of the tilting conveyor unit, to determine whether or not the conveyed object is detected. Thus, the detection of the conveyed object in the sorting area can be performed remarkably accurately and reliably” ([0008]) providing a time efficiency benefit and therefore commercial benefit to the combination of inventions.
Piana et al. and Duan et al. and Kobayashi and Fujio et al. do not disclose vibration specifically.
Dragotta teaches the front conveyor belt is configured to convey the product object in a state of vibration to the conveyor apparatus (An apparatus is provided for optically inspecting containers of liquid solutions. The apparatus includes a fixture for gripping the container and a conveyor or indexable table for moving the fixtured container into alignment with a camera or other optical inspection device. The apparatus further includes a vibrator at the inspection station. The vibrator causes the container of the liquid solution to vibrate sufficiently for extraneous material in the solution to move into a position that permits accurate visual inspection, abstract, Each pair of gripping fingers 26 is operative for securely holding a syringe 12 in a selected position and orientation, [0022], As shown most clearly in FIGS. 3 and 4, the apparatus 10 includes a vibrator 34. The vibrator 34 is operative to vibrate the fixture 24 as the fixture is indexed into the inspection station 27. The vibration is of sufficient amplitude, duration and frequency to cause minor agitation of the liquid pharmaceutical product in the syringe 12 that is sufficient to move any extraneous material therein into a more central position within the syringe 12. The vibrator 34 further is operative to stop vibrating as the fixture reaches the inspection station, or shortly after the fixture has reached the inspection station, [0024]).
Piana et al. and Duan et al. and Dragotta are in the same art of container inspection (Piana et al., abstract; Duan et al., abstract; Dragotta, abstract). The combination of Dragotta with Piana et al. and Duan et al. and Kobayashi and Fujio et al. enables using vibration. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the vibration of Dragotta with the invention of Piana et al. and Duan et al. and Kobayashi and Fujio et al. as this was known at the time of filing, the combination would have predictable results, and as Dragotta indicates “The apparatus further includes a vibrator at the inspection station. The vibrator causes the container of the liquid solution to vibrate sufficiently for extraneous material in the solution to move into a position that permits accurate visual inspection” (abstract) thereby improving the accuracy of the inspection processes when combined.
Claim(s) 12 and 16-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Piana et al. (US 20170343483 A1) in view of Duan et al. (IDS: An Adaptive Threshold Segmentation Algorithm for Impurity in Liquid Automatic Detecting System) in view of Milne et al. (US 20220230394 A1).
Regarding claim 12, Piana et al. disclose an apparatus for impurity inspection in a product, (Inspection machines are used e.g. in the beverage industry for examining empties, such as glass or plastic bottles, for damage, contamination or residues of liquid, [0002]) comprising: determining a product object to be inspected and placing the product object in an inspection state (a feed conveying device configured to feed containers to the inspection device in succession, [0006], [0067]); performing video capture on the product object in the inspection state from a plurality of different preset capture angles to obtain captured videos of the product object at each of the preset capture angles (For example, the whole container height may be illuminated with an LED area light, with one or a plurality of CCD cameras taking one or a plurality of pictures of the container sidewall from different angles of view. For example, two pictures of the sidewall can be taken from different angles of view via a camera and an optical system comprising four mirrors, the angles of view deviating from one another e.g. by 90° in the circumferential direction, [0047]); performing image processing on the captured video to obtain a captured image corresponding to the preset capture angle (The data of this inspection station can be transmitted to the processing unit 180, which is here schematically shown, for further processing, [0069]); and performing impurity inspection on the product object according to the captured images corresponding to each of the preset capture angles to generate an inspection result corresponding to the product object (The sensor data or optical data recorded by the inspection stations may be transmitted to an evaluation unit, e.g. a computing unit, of the inspection device, which will evaluate the data automatically so as to detect damage or contamination, [0049], The processing unit 180 evaluates the data automatically, so as to detect e.g. 
damage of the container bottom, [0069]).
Piana et al. do not disclose using video. Piana et al. do not explicitly disclose a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus; the memory is configured to store a computer program.
Duan et al. teach performing video capture on the product object in the inspection state to obtain captured videos of the product object (Be in the detection section, the bottle stops rotation, and the digital camera is triggered and a series of images of moving medicinal liquid is get and delivered to industrial computer for image processing, part II, video sequences, part III) performing image processing on the captured video (delivered to industrial computer for image processing, part II, Because of the less moving information of two contiguous frames difference, in this paper, the moving information is achieved from the intersectant-frame difference with 4 continuous frames, and the fourth-order moment of the summation with a threshold proportional to the estimated background activity is performed, part III, The thresholded images contain targets and high-level noise. Based on the connectivity and geometry characteristics of impurity and noise, and considering that the number of pixels of the connected domain formed by the moving impurity is about 20 to 600 pixels, and the length-width ratio is about 0.7 ~ 1.8, it can be regard as the testing condition to discard the noise points and give out the detection result. In this paper, a post-processing algorithm is introduced for target tracking, part IVB) performing impurity inspection on the product object according to the captured images to generate an inspection result corresponding to the product object (bottle which has impurity will be marked with serial number, part II, Repeat step3 and give out the detection result, part IVB).
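For illustration only, the motion-segmentation approach Duan et al. describe (absolute inter-frame differences over four continuous frames, a fourth-order statistic of their summation, and a threshold proportional to the estimated background activity) can be sketched as follows. The function name, the fourth-power statistic as a simplified stand-in for the fourth-order moment, and the constant `k` are assumptions for illustration, not Duan et al.'s actual implementation; the subsequent connectivity/geometry filtering (20 to 600 pixels, length-width ratio 0.7 to 1.8) is omitted.

```python
import numpy as np

def impurity_mask(frames, k=5.0):
    """Sketch of motion segmentation for suspended impurities:
    sum of absolute inter-frame differences over 4 consecutive
    frames, a fourth-power statistic, and a threshold proportional
    to the estimated background activity (k is an assumed constant)."""
    f = np.asarray(frames, dtype=float)
    assert f.shape[0] >= 4, "need at least 4 consecutive frames"
    diffs = np.abs(np.diff(f[:4], axis=0))  # 3 inter-frame differences
    s = diffs.sum(axis=0)                   # summation of the differences
    m4 = s ** 4                             # fourth power emphasizes strong motion
    thresh = k * m4.mean()                  # proportional to background activity
    return m4 > thresh                      # binary mask of moving impurity pixels
```

Because the liquid (and any impurity) keeps moving after the bottle stops rotating while the bottle-wall spots stay fixed, pixels with large accumulated differences are candidate impurities.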
Piana et al. and Duan et al. are in the same art of container inspection (Piana et al., abstract; Duan et al., abstract). The combination of Duan et al. with Piana et al. enables using video. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the video of Duan et al. with the invention of Piana et al. as this was known at the time of filing, the combination would have predictable results, and as Duan et al. indicate “In the first stage,
a new method of extracting information of moving targets in visible image sequences is proposed, which is based on the fourth-order moment of the summation of inter-frame difference with 5 continuous frames. The experimental and factual testing results show that the
proposed algorithm can meet the demands of impurity in medicinal liquid in real-time detecting, and that it is a practicable and effective image segmentation method” (abstract) showing how particulate movement information can be helpful in detecting impurities which will improve the inspection capabilities of Piana et al.
Piana et al. and Duan et al. do not explicitly disclose a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus; the memory is configured to store a computer program.
Milne et al. teach a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus; the memory is configured to store a computer program (System 600 includes a visual inspection system 602 communicatively coupled to a computing system 604. Visual inspection system 602 includes hardware (e.g., a stage or platform, three or more cameras, etc.), as well as firmware and/or software, that is configured to capture digital 2D images of a sample within a vessel, [0048], Computing system 604 may generally be configured to control/automate the operation of visual inspection system 602, and to receive and process images captured/generated by visual inspection system 602, as discussed further below. Computing system 604 is also coupled to (or includes) a display 616, via which computing system 604 may render visual information to a user. Computing system 604 may be a general-purpose computer that is specifically programmed to perform the operations discussed herein, or may be a special-purpose computing device. As seen in FIG. 6, computing system 604 includes a processing unit 610 and a memory unit 612. In some embodiments, however, computing system 604 includes two or more computers that are either co-located or remote from each other. In these distributed embodiments, the operations described herein relating to processing unit 610 and memory unit 612 may be divided among multiple processing units and/or memory units, respectively, [0050], Processing unit 610 includes one or more processors, each of which may be a programmable microprocessor that executes software instructions stored in memory 612 to execute some or all of the functions of computing system 604 as described herein. Processing unit 610 may include one or more graphics processing units (GPUs) and/or one or more central processing units (CPUs), for example. 
Alternatively, or in addition, some of the processors in processing unit 610 may be other types of processors (e.g., application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), etc.), and some of the functionality of computing system 604 as described herein may instead be implemented in hardware. Memory unit 612 may include one or more volatile and/or non-volatile memories. Any suitable memory type or types may be included in memory unit 612, such as read-only memory (ROM), random access memory (RAM), flash memory, a solid-state drive (SSD), a hard disk drive (HDD), and so on. Collectively, memory unit 612 may store one or more software applications, the data received/used by those applications, and the data output/generated by those applications).
Piana et al. and Duan et al. and Milne et al. are in the same art of container inspection (Piana et al., abstract; Duan et al., abstract; Milne et al., [0031]). The combination of Milne et al. with Piana et al. enables using a processor. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the processor of Milne et al. with the invention of Piana et al. as this was known at the time of filing, the combination would have predictable results, and as Milne et al. indicate “In some embodiments, software techniques are used to bolster the accuracy of particle tracking (e.g., in addition to the 3D imaging techniques discussed above)” ([0071]) and “The 3D imaging techniques described above may provide various advantages, in addition to enhanced accuracy for particle detection, sizing, shape determination, classification, and/or tracking. By using 3D look-up tables, for example, the system can be expanded to any arbitrary number of cameras, at any orientation relative to the vessel/sample (e.g. at points around a sphere that is centered on the vessel). Because the look-up-tables are pre-calculated, adding cameras at different angles, potentially with different lenses, does not necessarily add a substantial computational burden when inspecting any given sample lot. In addition to providing more informative camera perspectives, this approach provides versatility, e.g., by allowing cameras to be moved in order to accommodate other system design considerations, such as conveyance for the vessels/samples under scrutiny (e.g., the path of a robotic arm). This may be important when integrating the technology into larger, automated commercial manufacturing platforms, for example” ([0073]) demonstrating an improvement to accuracy while not compromising modular adaptability and computational efficiency.
Regarding claim 16, Piana et al. and Duan et al. and Milne et al. disclose the apparatus according to claim 12. Piana et al. and Duan et al. further indicate the processor is further caused to: place the product object in an inspection state of a target tilt angle (Piana et al., “Due to the comblike interengagement of the clamps, the container is here reliably prevented from tilting while it is being conveyed along the throughput station”, [0088]) [tilt prevented is interpreted as 0 degrees tilt/flat] as well as vibration and/or rotation (Piana et al., By continuing to move the conveying unit 443a at a higher speed than the conveying unit 443b, this rotation of the container 433 will continue until a rotation of about 90° has taken place, [0085], Fig. 4:
[image: media_image1.png, greyscale PNG, 292 × 637]
; Duan et al., Firstly, when the bottle of medicinal liquid which needs detected is transmitted into the rotating acceleration section, the medicinal liquid will be rotated, and impurity that suspended or settled at the bottom of the bottle come into moving, at the same time, medicinal liquid is distinguished from the spots which belong to the bottle, part II, See also Fig. 1:
[image: media_image2.png, greyscale PNG, 268 × 380]
).
Regarding claim 17, Piana et al. and Duan et al. and Milne et al. disclose the apparatus according to claim 12. Duan et al. and Milne et al. further indicate extracting a target number of image frames from the captured video corresponding to the preset capture angle (Duan et al., 5 continuous frames, abstract, Given 4 continuous frames, part III; Milne et al., reconstruct the 3D particle (or other object) in a single 3D frame based on the 3D frames immediately before and after the occlusion event, [0071], one or more of the discrete objects are tracked across two or more of the 3D images (frames), [0079]); and performing image alignment on the image frames corresponding to each of the preset capture angles to obtain the captured images corresponding to each of the preset capture angles (Milne et al., Thereafter, a digital resampling module 634 of application 620 pre-processes the 2D images captured by visual inspection system 602, using the calibration data generated by calibration module 632, to spatially normalize/align the 2D images from the different cameras (e.g., with pixel-level precision), cameras have different angles of inclination relative to the vessel, [0054], generating calibration data (e.g., correction factors/matrices) that will be used to align the 2D images captured by visual inspection system 602 during the inspection process, [0055], In some embodiments, calibration module 632 (or another module or application) also facilitates a validation stage at which it is determined whether the calibration data generated for the various cameras properly aligns the images.
In one such embodiment where images of three cameras are calibrated (e.g., cameras 302a through 302c), a monochrome calibration image from each camera is input to each color channel (i.e., red, green and blue channels) of a composite RGB image, [0061], By using 3D look-up tables, for example, the system can be expanded to any arbitrary number of cameras, at any orientation relative to the vessel/sample (e.g. at points around a sphere that is centered on the vessel). Because the look-up-tables are pre-calculated, adding cameras at different angles, potentially with different lenses, does not necessarily add a substantial computational burden when inspecting any given sample lot, [0073], Further, a full 3D ray optics approach, along with knowledge of the true orientation of the vessel shape relative to the vertical axis, can allow for full correction of any spatial alignment errors in the images of different cameras (even, in some embodiments, if the software-based calibration techniques described above are not implemented), [0074], At block 902, at least three 2D images of a sample in a vessel (e.g., vessel 400 or 500) are captured by at least three cameras located at different positions around the vessel (e.g., cameras 302a through 302d, or cameras 302b through 302d, of FIG. 3). The optical axis of a first one of the cameras is inclined or declined at a first angle (greater than or equal to 0 degrees) relative to the horizontal plane, and the optical axis of a second one of the cameras is inclined or declined at a second angle relative to the horizontal plane, with the second angle being at least five degrees greater than the first angle., [0077]).
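For illustration only, spatially aligning 2D images from different cameras by estimating horizontal and vertical offsets, of the kind Milne et al. describe in [0054]-[0055], can be sketched with FFT phase correlation. The use of phase correlation, the function names, and the restriction to integer circular shifts are assumptions for illustration, not Milne et al.'s disclosed implementation; Milne et al.'s calibration data also include a rotation component, which this sketch omits.

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (dy, dx) translation of img relative to ref
    via phase correlation: the normalized cross-power spectrum of two
    shifted images is a pure phase ramp whose inverse FFT peaks at the shift."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(img)
    cross = np.conj(F1) * F2
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase, discard magnitude
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map shifts beyond half the image size to negative offsets
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def align(ref, img):
    """Undo the estimated translation so img is registered to ref."""
    dy, dx = estimate_shift(ref, img)
    return np.roll(img, (-dy, -dx), axis=(0, 1))
```

In practice sub-pixel offsets and rotation would also be corrected, consistent with Milne et al.'s "pixel-level precision" normalization; this sketch shows only the translational core.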
Regarding claim 18, Piana et al. and Duan et al. and Milne et al. disclose the apparatus according to claim 17. Milne et al. further indicate the processor is further caused to: extract product feature information corresponding to the product object in each of the image frames; and perform image correction on the image frames according to the product feature information corresponding to each of the image frames to obtain the captured images corresponding to each of the preset capture angles (In some embodiments, VIS control module 630 causes visual inspection system 602 to perform certain calibration-related procedures, such as capturing 2D calibration images, and a calibration module 632 of application 620 processes the calibration images to generate calibration data (e.g., correction factors, matrices, etc.). Thereafter, a digital resampling module 634 of application 620 pre-processes the 2D images captured by visual inspection system 602, using the calibration data generated by calibration module 632, to spatially normalize/align the 2D images from the different cameras (e.g., with pixel-level precision), [0054], In this calibration procedure, VIS control module 630 causes the cameras of visual inspection system 602 (e.g., cameras 302a through 302d) to sequentially capture respective 2D images of the vessel under backlit conditions. The vessel (e.g., held or supported by fixture 306) may be empty or filled with a sample (e.g., a liquid drug product). 
In some embodiments, for example, calibration procedures are repeated for each new vessel/sample, in which case a sample will be present in the vessel during each iteration thereof, [0055], “Calibration module 632 processes each of the 2D calibration images from the various cameras to determine at least a horizontal offset of the image, a vertical offset of the image, and a rotation of the image… In other embodiments, calibration module 632 compares the locations of the detected edges to the locations of edges detected in a different one of the calibration images, which is used as a reference image. For example, calibration module 632 may compare the locations of edges detected in the calibration images obtained by cameras 302b through 302d with the locations of the edges detected in the calibration image obtained by camera 302a. Calibration module 632 may store the calibration data for each camera in memory unit 612 or another suitable location”, [0057], If the images are properly aligned, any non-transparent objects within the composite RGB image (including portions of the vessel, possibly) should appear white. In one embodiment, calibration module 632 causes the composite RGB image to be presented via a user interface shown on display 616, such that a human user can quickly confirm whether proper alignment has been achieved. Alternatively, calibration module 632 may process the composite RGB image to detect any red, blue, and/or green areas, and determine whether proper alignment has been achieved based on those areas (if any). 
For example, calibration module 632 may determine that the calibration procedure failed when detecting more than a threshold number of single- or two-colored pixels (possibly excluding areas where particles may reside in a sample), [0061], synchronize the 2D images captured by the multiple cameras, [0068], For each trajectory, application 620 may generate time-stamped measurements of particle size, particle shape, and/or other metrics, [0071], a full 3D ray optics approach, along with knowledge of the true orientation of the vessel shape relative to the vertical axis, can allow for full correction of any spatial alignment errors in the images of different cameras, [0074]) [vessel edges, particle trajectory - interpreted as “product feature information”].
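For illustration only, the RGB-composite validation Milne et al. describe in [0061] (one monochrome calibration image per color channel, so that properly aligned non-transparent objects appear white/grey and misaligned edges appear colored) can be sketched as follows. The function name and the tolerance parameter are assumptions for illustration, not Milne et al.'s disclosed implementation.

```python
import numpy as np

def composite_misalignment(img_r, img_g, img_b, tol=0.1):
    """Place one monochrome calibration image in each RGB channel and
    count pixels whose channels disagree: where the three cameras'
    images coincide the composite is grey/white (all channels equal),
    so channel disagreement beyond tol indicates misalignment."""
    rgb = np.stack([img_r, img_g, img_b], axis=-1).astype(float)
    spread = rgb.max(axis=-1) - rgb.min(axis=-1)  # 0 where channels agree
    return int((spread > tol).sum())              # number of "colored" pixels
```

A count of zero indicates the calibration data aligned the images; a count above a chosen threshold would correspond to Milne et al.'s determination that the calibration procedure failed.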
Allowable Subject Matter
Claims 5-8 and 19-22 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following art is cited as relevant but not sufficient alone or in combination to disclose, teach, or fairly suggest the subject matter of the independent claims:
US 20220147752 A1: In an embodiment of the present invention, sources of the first image and the second image are not specifically defined. For example, the first image and the second image may be two images captured at different angles during a horizontal rotation process by the same camera having a partially overlapping field of view. Alternatively, the first image and the second image may be two images respectively captured by two cameras having a partially overlapping field of view. FIG. 5 shows a second block diagram of an image stitching apparatus 100 according to an embodiment of the present invention. Referring to FIG. 5, in one embodiment, the image stitching apparatus 100 further includes a difference calculation circuit 140, which is configured to calculate at least a difference matrix between the first image and the second image with respect to the overlapping area. Once the difference matrix is obtained, the determination circuit 120 is further configured to calculate the target stitching line using constraints of minimizing a difference between two sides of the stitching line and avoiding the motion area according to the matrix difference.
US 10776963 B2: FIG. 10D depicts an additional embodiment in which MRI imaging slices for a given tissue sample are taken at additional multiple different angles. In the embodiment of FIG. 10D, multiple imaging slices are taken at different angles radially about an axis in the z-plane. In other words, the image slice plane is rotated about an axis in the z-plane to obtain a large number of image slices. Each image slice has a different angle rotated slightly from an adjusted image slice angle.
1. A method comprising: receiving, by an image computing unit, image data from a sample, wherein the image data corresponds to two or more image datasets, and wherein each of the image datasets comprises a plurality of images; receiving selection, by the image computing unit, of at least two image datasets from the two or more image datasets; creating, by the image computing unit, three-dimensional (3D) matrices from each of the at least two image datasets that are selected, wherein a first 3D matrix is created from one or more images from a first of the at least two image datasets and a second 3D matrix is created from one or more images from a second of the at least two image datasets; refining, by the image computing unit, the 3D matrices; applying, by the image computing unit, one or more matrix operations to the refined 3D matrices to create a differential 3D matrix; receiving, by the image computing unit, selection of selected matrix columns of the refined 3D matrices and the 3D differential matrix, wherein the selected matrix columns of the refined 3D matrices and the 3D differential matrix correspond to a same portion of the sample; applying, by the image computing unit, a convolution algorithm to the selected matrix columns of the refined 3D matrices and the 3D differential matrix for creating a two-dimensional (2D) matrix; and applying, by the image computing unit, a reconstruction algorithm to the 2D matrix to create a super-resolution biomarker map (SRBM) image.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE M ENTEZARI HAUSMANN whose telephone number is (571)270-5084. The examiner can normally be reached 10-7 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent M Rudolph can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHELLE M ENTEZARI HAUSMANN/Primary Examiner, Art Unit 2671