DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s amendment filed 10/14/2025 has been entered and made of record. No new claims were added and no claims were cancelled. Claims 1-20 are pending.
Applicant's arguments filed 10/14/2025 have been fully considered but they are not persuasive.
On page 5 of the remarks, the applicant argues that the cited prior art of Nechiporenko et al. (US PG-Pub 2012/0106787 A1) in view of Ikushima et al. (US PG-Pub 2014/0078498 A1) does not disclose storing a geographic location of the current attribute of the item boundary and a geographic location of a reference attribute of the item boundary. The Examiner respectfully disagrees. Ikushima is directed to determining product defects by comparing captured images to a previously captured image from a database, as disclosed in ¶[0092]. Ikushima discloses storing a geographic location of the current attribute of the item boundary in ¶[0182]: “the CPU 22 stores a position of the inspection region 127 set at that time and a model image encircled with the inspection region 127 in the memory.” In this section of the prior art, a position is stored in relation to the captured image, and it is stored in the memory of the device. Ikushima further discloses storing a geographic location of a reference attribute of the item boundary in ¶[0249]-¶[0250], which describe storing the coordinate data of the reference point where a defect is located on the object; ¶[0155] further discloses determining the exact position at which a defect occurs in the image when comparing the captured object to a model image. Accordingly, the applicant’s arguments are not persuasive.
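Examiner's note (illustrative only): the position-storing operation described in Ikushima ¶[0182] and ¶[0249] is of the kind shown in the following minimal sketch, which is an assumption for illustration and not code from either reference; it locates a model image within a captured image using OpenCV template matching and stores the matched inspection-region coordinates and reference point.

import cv2

def locate_and_store(captured_path, model_path, store):
    # Load the captured inspection image and the stored model image.
    captured = cv2.imread(captured_path, cv2.IMREAD_GRAYSCALE)
    model = cv2.imread(model_path, cv2.IMREAD_GRAYSCALE)
    # Normalized cross-correlation search for the model within the capture.
    result = cv2.matchTemplate(captured, model, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    h, w = model.shape
    # Store the inspection-region position and its center as the reference
    # point, analogous to the coordinate data retained in Ikushima's memory.
    store["inspection_region"] = (top_left, (top_left[0] + w, top_left[1] + h))
    store["reference_point"] = (top_left[0] + w // 2, top_left[1] + h // 2)
    return score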
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8 and 11-18 are rejected under 35 U.S.C. 103 as being unpatentable over Nechiporenko et al. (US PG-Pub 2012/0106787 A1) in view of Ikushima et al. (US PG-Pub 2014/0078498 A1).
Regarding Claim 1, the combination of Nechiporenko and Ikushima teaches a method in a computing device of detecting boundary deformation for an item (¶[0077], “Thus, apparatus 100 can be operated on a typical logo such as logo 700 of FIG. 7 to determine a geometric representation 702 of the logo and to determine a property of the logo such as the shape of circular edge 706 or one (or more) of the co-ordinates 704a, 704b, 704c of the logo (or the geometric representation 702 of the logo). Again, similar techniques can be applied to shape recognition for elements of an item in/on a goods package. Indeed, the item in/on the goods package may be considered an element itself.”; as disclosed in ¶[0077], the prior art uses edge detection to determine the geometric representation of the item), the method comprising: obtaining a current image of the item (¶[0059], “The series 121 of images are received at apparatus 100 by conventional means such as an i/o port/module and, optionally, stored in memory 108. Apparatus 100 is configured under control of the processor 102 to extract element data from the goods package elements in the series 121 of images.”; ¶[0059] discloses a processor receiving a series of images of packaged goods); obtaining, from the current image, a current attribute of a boundary of the item (¶[0065], “Turning now to FIG. 2, operation of element extraction module 110 is discussed in greater detail. The discussion is given in the context of the goods package being a goods carton made of cardboard, plastic or similar material, but the techniques are applicable to all types of goods packages. An acquired image of a side 200 of a goods package is illustrated. Visible in the image are labels 202 and 204, a vendor logo 206, handling/shipping marks 208 and barcode 210. Label 202 has co-ordinates 212a, 212b, 212c, 212d located at the four corners of the label.”; as shown in FIG. 2 and disclosed in ¶[0065], the prior art extracts elements from the package based on boundary points); retrieving the reference attribute of the item boundary, the reference attribute of the item boundary corresponding to a reference image of the item in an initial state (¶[0089] discloses that each package's data is extracted from the original image, and ¶[0099] discloses comparing the data extracted from the original image to a different image to identify differences between the packages); comparing the current attribute of the item boundary to the reference attribute of the item boundary (¶[0089], “Each package's data after that may be compared by a comparator in for, example, data post-processing module 117 which implements comparator functionality similar to that described with reference to FIG. 14 below, but in accordance with rules set by templates for logo type and position (say, one rule for TV, another for fridges etc.). If all data correlates, and sufficient information (i.e. Part Number, Serial Number etc.) is available for each package, a result for each package is sent to the database by data definition/extraction module 118. If not, an alarm will be sent to the remote operator/local operator, detailing the package position on pallet mentioned, to overturn or to make a manual entry into system”; as disclosed in this section, a comparison is performed between two different instances of the package data, and an alert is sent if the package does not meet a requirement);
and determining, based on the comparison, whether the item boundary is deformed relative to the initial state of the item (¶[0083], “Apparatus 100 then is able to refine the preliminary grid to a confirmed grid and shifts grid lines 908a, 910a to lines 908b, 910b to define the data grid. Significant elements, including recognised labels, logos and barcodes, and (if any) damage within each goods package boundaries defined by the lines of the data grid (in pixels) are associated with the a particular goods package. For instance, in the data model depicted by 900, label 904 and logo 912 are associated with goods package 902.”; ¶[0083] discloses that a requirement for a package to pass is the absence of damage to the package); and when the item boundary is deformed, generating an exception notification (¶[0118], “Apparatus 100 is able to flag, for a user attention, a candidate character sting which does not satisfy the comparison criterion. Thus, if the LDx is greater than a predefined threshold, apparatus 100 determines that the decoded word is not in the DAW and flags it as a `Special String`. This special string could, for example, be a serial number or part number and could be useful in resolving damaged barcodes.”; as disclosed in ¶[0118], an alert is sent to the user if the package barcode is damaged).
Nechiporenko does not explicitly teach storing a geographic location of the current attribute of the item boundary, and a geographic location of a reference attribute of the item boundary, the reference attribute of the item boundary corresponding to a reference image of the item in an initial state and the current attribute of the item boundary corresponding to an image of the item in a current state.
Ikushima teaches storing a geographic location of the current attribute of the item boundary (¶[0182], “the CPU 22 stores a position of the inspection region 127 set at that time and a model image encircled with the inspection region 127 in the memory.”; in this section of the prior art, a position is stored in relation to the captured image, and it is stored in the memory of the device),
and a geographic location of a reference attribute of the item boundary (¶[0249], “In FIG. 62, an image recognition section 322 compares a model image showing part of a non-defective product of the inspection object 8 set through the UI control section 310 to the image of the inspection object 8 flowing on the line acquired by the camera 4, and specifies a portion consistent with the model image through pattern recognition. The image recognition section 322 searches for the model image in a range of a preset search region. An image region having the same size as the model image becomes an inspection region. The center of the inspection region consistent with the model image serves as the reference point. The reference point may be finely adjusted through the UI control section. Coordinate data of the search region or coordinate data of the reference point (detection point) is stored in the characteristic data storage section”; as disclosed in ¶[0249], the prior art stores the coordinate data of the reference point where a defect is located on the object), the reference attribute of the item boundary corresponding to a reference image of the item in an initial state and the current attribute of the item boundary corresponding to an image of the item in a current state (¶[0116], “Pattern search measurement tool: A position, an inclined angle, and a correlation value of an image pattern are measured by pre-storing an image pattern (model image) to be compared in a storage device and detecting a portion similar to the stored image pattern from images of the imaged inspection object 8.”; ¶[0117], “Scratch measurement tool: An average density value of pixel values is calculated by moving a small region (segment) within a set inspection region, and a scratch is determined to be present at a position having a density difference of a threshold value or more.”; as disclosed in ¶[0116]-¶[0117], the prior art compares the captured image to a prestored image and determines whether there is a defect by using the pixel data of an inspection region of the image).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the invention as taught by Nechiporenko with the teachings of Ikushima in order to store information such as the location of the object in a database. One skilled in the art would have been motivated to modify Nechiporenko in this manner in order to inspect the appearance of a product by imaging the product (Ikushima, ¶[0003]).
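Examiner's note (illustrative only): the comparison-and-determination steps mapped above are of the kind shown in the following minimal sketch, which is an assumption for illustration rather than code from either reference; it compares the item's largest contour in a current image against a reference contour and flags deformation when a Hu-moment shape distance exceeds a threshold (the threshold value is hypothetical).

import cv2

def boundary_deformed(reference_img, current_img, threshold=0.15):
    # Extract the item's outer boundary as the largest external contour
    # of an Otsu-binarized grayscale image.
    def largest_contour(img):
        _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea)

    ref_boundary = largest_contour(reference_img)
    cur_boundary = largest_contour(current_img)
    # Hu-moment shape distance: near zero for similar boundaries.
    distance = cv2.matchShapes(ref_boundary, cur_boundary, cv2.CONTOURS_MATCH_I1, 0.0)
    # True would trigger the claimed exception notification.
    return distance > threshold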
Regarding Claim 2, the combination of Nechiporenko and Ikushima teaches the method of claim 1, where Nechiporenko further teaches wherein retrieving the reference attribute includes extracting the reference attribute from a machine-readable indicium affixed to the item (¶[0072], “Part of the element data extraction process may include apparatus 100 performing OCR techniques to extract the human-readable alpha-numeric characters on the label and conventional techniques to read the label barcodes for use in the data modelling.”; ¶[0072] discloses using OCR on the image to read barcode labels).
Regarding Claim 3, the combination of Nechiporenko and Ikushima teaches the method of claim 2, where Nechiporenko further teaches wherein the machine-readable indicium includes at least one of a barcode and an RFID tag (¶[0072], “Part of the element data extraction process may include apparatus 100 performing OCR techniques to extract the human-readable alpha-numeric characters on the label and conventional techniques to read the label barcodes for use in the data modelling.”; ¶[0072] discloses using OCR on the image to read barcode labels).
Regarding Claim 4, the combination of Nechiporenko and Ikushima teaches the method of claim 1, where Nechiporenko further teaches generating, prior to obtaining the current image, the reference attribute by: obtaining a reference image of the item and deriving the reference attribute from the image (¶[0065], “Turning now to FIG. 2, operation of element extraction module 110 is discussed in greater detail. The discussion is given in the context of the goods package being a goods carton made of cardboard, plastic or similar material, but the techniques are applicable to all types of goods packages. An acquired image of a side 200 of a goods package is illustrated. Visible in the image are labels 202 and 204, a vendor logo 206, handling/shipping marks 208 and barcode 210. Label 202 has co-ordinates 212a, 212b, 212c, 212d located at the four corners of the label.”).
Regarding Claim 5, the combination of Nechiporenko and Ikushima teaches the method of claim 4, where Nechiporenko further teaches storing the reference attribute for subsequent retrieval (¶[0132], “The consignee label image is stored until the full data from the shipment (including other packages/pallets) is checked. Data from "consignee" part of other labels is counted by symbols, and for every symbol, the percentage of presence is calculated (i.e. which percentage of "consignee" part has that symbol, in alphabetic order).”; ¶[0132] discloses storing information of the package).
Regarding Claim 6, the combination of Nechiporenko and Ikushima teaches the method of claim 5, where Nechiporenko further teaches wherein storing the reference attribute includes: encoding the reference attribute in a machine-readable indicium configured to be affixed to the item (¶[0127], “In one implementation of this, a human-readable character string for the barcode captured in an OCR process is compared with a corresponding barcode. In practice, a common situation is a barcode does 1432 not have a corresponding or associated human-readable character string 1434, containing the (say) serial number"**********`; and the OCR string, containing something like "S/N:***********". In fact, the two strings may be not even on the same label. However, apparatus 100 has one or more templates describing both possible strings and how to evaluate them, and the apparatus 100 will still, therefore, be able to compare the barcode and the character string when they belong to the same goods package.”; ¶[0127] discloses using OCR to decode the barcode information in the image).
Regarding Claim 7, the combination of Nechiporenko and Ikushima teaches the method of claim 1, where Nechiporenko further teaches storing the current attribute in association with the reference attribute and an identifier of the item (¶[0130], “The comparator of FIG. 14c may be provided in a separate apparatus (not illustrated), in which case an apparatus for analysing a barcode read from an image of a goods package comprises a processor and a memory for storing one or more routines. When executed under control of the processor, the routines control the apparatus: to determine a barcode distance between the barcode and a barcode-related character string from a comparison of a set of character values for the barcode and a set of character values for the barcode-related character string; and to determine, from the comparison, whether the barcode distance satisfies a barcode comparison criterion.”).
Regarding Claim 8, the combination of Nechiporenko and Ikushima teaches the method of claim 7, where Nechiporenko further teaches wherein storing the current attribute includes encoding the current attribute in a machine-readable indicium configured to be affixed to the item (¶[0134], “In another example, apparatus 100 checks shipment number 123. After a referral to a shipping database, apparatus 100 determines the shipment is a shipment of DELL.TM. products and on the package it should be written "Consignee: Azimuth". Apparatus 100 has only recognized "Consignee: Azimut" from the OCR process which does not match the expectation and would, otherwise, cause an error.”; ¶[0134] discloses checking an expected label string for the package against the string recognized by OCR).
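Examiner's note (illustrative only): the “LDx” barcode-distance comparison cited from Nechiporenko ¶[0118] and ¶[0130] is consistent with an edit-distance measure. The following minimal sketch, an assumption for illustration rather than the reference's actual routine, computes a Levenshtein distance between a decoded barcode string and its OCR counterpart and flags a “Special String” when a threshold is exceeded (the threshold value is hypothetical).

def levenshtein(a: str, b: str) -> int:
    # Dynamic-programming edit distance between two character strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def flag_special_string(decoded: str, ocr_string: str, threshold: int = 2) -> bool:
    # Flag the candidate string for user attention when the barcode
    # distance exceeds the predefined threshold.
    return levenshtein(decoded, ocr_string) > threshold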
Regarding Claim 11, Nechiporenko teaches a computing device, comprising: a memory; and a processor (¶[0053], “Turning first to FIG. 1 apparatus 100 comprises a microprocessor 102 and a memory 104 for storing routines 106. The microprocessor 102 operates to execute the routines 106 to control operation of the apparatus 100 as will be described in greater detail below”; as disclosed in ¶[0053], the apparatus comprises a processor and memory) configured to: obtain a current image of the item (¶[0059], “The series 121 of images are received at apparatus 100 by conventional means such as an i/o port/module and, optionally, stored in memory 108. Apparatus 100 is configured under control of the processor 102 to extract element data from the goods package elements in the series 121 of images.”; ¶[0059] discloses a processor receiving a series of images of packaged goods); obtain, from the current image, a current attribute of a boundary of the item (¶[0065], “Turning now to FIG. 2, operation of element extraction module 110 is discussed in greater detail. The discussion is given in the context of the goods package being a goods carton made of cardboard, plastic or similar material, but the techniques are applicable to all types of goods packages. An acquired image of a side 200 of a goods package is illustrated. Visible in the image are labels 202 and 204, a vendor logo 206, handling/shipping marks 208 and barcode 210. Label 202 has co-ordinates 212a, 212b, 212c, 212d located at the four corners of the label.”; as shown in FIG. 2 and disclosed in ¶[0065], the prior art extracts elements from the package based on boundary points); retrieve the reference attribute of the item boundary, the reference attribute corresponding to the item in an initial state (¶[0089] discloses that each package's data is extracted from the original image, and ¶[0099] discloses comparing the data extracted from the original image to a different image to identify differences between the packages); compare the current attribute to the reference attribute (¶[0089], “Each package's data after that may be compared by a comparator in for, example, data post-processing module 117 which implements comparator functionality similar to that described with reference to FIG. 14 below, but in accordance with rules set by templates for logo type and position (say, one rule for TV, another for fridges etc.). If all data correlates, and sufficient information (i.e. Part Number, Serial Number etc.) is available for each package, a result for each package is sent to the database by data definition/extraction module 118. If not, an alarm will be sent to the remote operator/local operator, detailing the package position on pallet mentioned, to overturn or to make a manual entry into system”; as disclosed in this section, a comparison is performed between two different instances of the package data, and an alert is sent if the package does not meet a requirement); determine, based on the comparison, whether the item boundary is deformed relative to the initial state of the item (¶[0083], “Apparatus 100 then is able to refine the preliminary grid to a confirmed grid and shifts grid lines 908a, 910a to lines 908b, 910b to define the data grid. Significant elements, including recognised labels, logos and barcodes, and (if any) damage within each goods package boundaries defined by the lines of the data grid (in pixels) are associated with the a particular goods package. For instance, in the data model depicted by 900, label 904 and logo 912 are associated with goods package 902.”; ¶[0083] discloses that a requirement for a package to pass is the absence of damage to the package); and when the item boundary is deformed, generate an exception notification (¶[0118], “Apparatus 100 is able to flag, for a user attention, a candidate character sting which does not satisfy the comparison criterion. Thus, if the LDx is greater than a predefined threshold, apparatus 100 determines that the decoded word is not in the DAW and flags it as a `Special String`. This special string could, for example, be a serial number or part number and could be useful in resolving damaged barcodes.”; as disclosed in ¶[0118], an alert is sent to the user if the package barcode is damaged).
Nechiporenko does not explicitly teach storing a geographic location of the current attribute of the item boundary, and a geographic location of a reference attribute of the item boundary, the reference attribute of the item boundary corresponding to a reference image of the item in an initial state and the current attribute of the item boundary corresponding to an image of the item in a current state.
Ikushima teaches storing a geographic location of the current attribute of the item boundary, and a geographic location of a reference attribute of the item boundary (¶[0249], “In FIG. 62, an image recognition section 322 compares a model image showing part of a non-defective product of the inspection object 8 set through the UI control section 310 to the image of the inspection object 8 flowing on the line acquired by the camera 4, and specifies a portion consistent with the model image through pattern recognition. The image recognition section 322 searches for the model image in a range of a preset search region. An image region having the same size as the model image becomes an inspection region. The center of the inspection region consistent with the model image serves as the reference point. The reference point may be finely adjusted through the UI control section. Coordinate data of the search region or coordinate data of the reference point (detection point) is stored in the characteristic data storage section”; as disclosed in ¶[0249], the prior art stores the coordinate data of the reference point where a defect is located on the object), the reference attribute of the item boundary corresponding to a reference image of the item in an initial state and the current attribute of the item boundary corresponding to an image of the item in a current state (¶[0116], “Pattern search measurement tool: A position, an inclined angle, and a correlation value of an image pattern are measured by pre-storing an image pattern (model image) to be compared in a storage device and detecting a portion similar to the stored image pattern from images of the imaged inspection object 8.”; ¶[0117], “Scratch measurement tool: An average density value of pixel values is calculated by moving a small region (segment) within a set inspection region, and a scratch is determined to be present at a position having a density difference of a threshold value or more.”; as disclosed in ¶[0116]-¶[0117], the prior art compares the captured image to a prestored image and determines whether there is a defect by using the pixel data of an inspection region of the image).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the invention as taught by Nechiporenko with the teachings of Ikushima in order to store information such as the location of the object in a database. One skilled in the art would have been motivated to modify Nechiporenko in this manner in order to inspect the appearance of a product by imaging the product (Ikushima, ¶[0003]).
Regarding Claim 12, the combination of Nechiporenko and Ikushima teaches the computing device of claim 11, where Nechiporenko further teaches wherein, to retrieve the reference attribute, the processor is configured to extract the reference attribute from a machine-readable indicium affixed to the item (¶[0072], “Part of the element data extraction process may include apparatus 100 performing OCR techniques to extract the human-readable alpha-numeric characters on the label and conventional techniques to read the label barcodes for use in the data modelling.”; ¶[0072] discloses using OCR on the image to read barcode labels).
Regarding Claim 13, the combination of Nechiporenko and Ikushima teaches the computing device of claim 12, where Nechiporenko further teaches wherein the machine-readable indicium includes at least one of a barcode and an RFID tag (¶[0072], “Part of the element data extraction process may include apparatus 100 performing OCR techniques to extract the human-readable alpha-numeric characters on the label and conventional techniques to read the label barcodes for use in the data modelling.”; ¶[0072] discloses using OCR on the image to read barcode labels).
Regarding Claim 14, the combination of Nechiporenko and Ikushima teaches the computing device of claim 11, where Nechiporenko further teaches wherein the processor is configured, prior to obtaining the current image, to generate the reference attribute by: obtaining the reference image of the item; and deriving the reference attribute from the image (¶[0065], “Turning now to FIG. 2, operation of element extraction module 110 is discussed in greater detail. The discussion is given in the context of the goods package being a goods carton made of cardboard, plastic or similar material, but the techniques are applicable to all types of goods packages. An acquired image of a side 200 of a goods package is illustrated. Visible in the image are labels 202 and 204, a vendor logo 206, handling/shipping marks 208 and barcode 210. Label 202 has co-ordinates 212a, 212b, 212c, 212d located at the four corners of the label.”).
Regarding Claim 15, the combination of Nechiporenko and Ikushima teaches the computing device of claim 14, where Nechiporenko further teaches wherein the processor is further configured to: store the reference attribute for subsequent retrieval (¶[0132], “The consignee label image is stored until the full data from the shipment (including other packages/pallets) is checked. Data from "consignee" part of other labels is counted by symbols, and for every symbol, the percentage of presence is calculated (i.e. which percentage of "consignee" part has that symbol, in alphabetic order).”; ¶[0132] discloses storing information of the package).
Regarding Claim 16, the combination of Nechiporenko and Ikushima teaches the computing device of claim 15, where Nechiporenko further teaches wherein, to store the reference attribute, the processor is configured to: encode the reference attribute in a machine-readable indicium configured to be affixed to the item (¶[0127], “In one implementation of this, a human-readable character string for the barcode captured in an OCR process is compared with a corresponding barcode. In practice, a common situation is a barcode does 1432 not have a corresponding or associated human-readable character string 1434, containing the (say) serial number"**********`; and the OCR string, containing something like "S/N:***********". In fact, the two strings may be not even on the same label. However, apparatus 100 has one or more templates describing both possible strings and how to evaluate them, and the apparatus 100 will still, therefore, be able to compare the barcode and the character string when they belong to the same goods package.”; ¶[0127] discloses using OCR to decode the barcode information in the image).
Regarding Claim 17, the combination of Nechiporenko and Ikushima teaches the computing device of claim 11, where Nechiporenko further teaches wherein the processor is further configured to: store the current attribute in association with the reference attribute and an identifier of the item (¶[0130], “The comparator of FIG. 14c may be provided in a separate apparatus (not illustrated), in which case an apparatus for analysing a barcode read from an image of a goods package comprises a processor and a memory for storing one or more routines. When executed under control of the processor, the routines control the apparatus: to determine a barcode distance between the barcode and a barcode-related character string from a comparison of a set of character values for the barcode and a set of character values for the barcode-related character string; and to determine, from the comparison, whether the barcode distance satisfies a barcode comparison criterion.”).
Regarding Claim 18, the combination of Nechiporenko and Ikushima teaches the computing device of claim 17, where Nechiporenko further teaches wherein, to store the current attribute, the processor is configured to encode the current attribute in a machine-readable indicium configured to be affixed to the item (¶[0134], “In another example, apparatus 100 checks shipment number 123. After a referral to a shipping database, apparatus 100 determines the shipment is a shipment of DELL.TM. products and on the package it should be written "Consignee: Azimuth". Apparatus 100 has only recognized "Consignee: Azimut" from the OCR process which does not match the expectation and would, otherwise, cause an error.”; ¶[0134] discloses checking an expected label string for the package against the string recognized by OCR).
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Nechiporenko et al. (US PG-Pub 2012/0106787 A1) in view of Ikushima et al. (US PG-Pub 2014/0078498 A1), and further in view of Peeters et al. (US PG-Pub 2023/0053393 A1).
Regarding Claim 9, while the combination of Nechiporenko and Ikushima teaches the method of claim 1, it does not explicitly teach wherein the exception notification includes a control signal to an item handling apparatus to redirect the item.
Peeters teaches wherein the exception notification includes a control signal to an item handling apparatus to redirect the item (¶[0122], “In the event that the images of the sample indicate that the sample has a first type of defect, then the sample is routed to a first bin. In the event that the images of the sample indicate that the sample has a second type of defect, then the sample is routed into a second bin. Alternatively, in the event that the images of the sample indicate that the sample does not have any defects, then the sample is routed to a third bin. The sorting machine can route the samples using various different methods. A first method of routing includes using a burst of air to redirect the trajectory of a sample as it falls into the collector bin. A second method of routing includes using a mechanically controlled flap to redirect the trajectory of a sample as it falls into the collector bin.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Nechiporenko and Ikushima with Peeters in order to redirect the item when there is a defect. One skilled in the art would have been motivated to modify Nechiporenko and Ikushima in this manner in order to improve sorting compared to unreliable sorting by human hands without the cost of replacing an entire processing line (Peeters, ¶[0220]).
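Examiner's note (illustrative only): a control signal of the kind Peeters describes in ¶[0122] could take the following form. This minimal sketch is an assumption for illustration; the Defect categories, bin names, and the diverter interface (route_to) are hypothetical, not Peeters' actual API.

from enum import Enum

class Defect(Enum):
    NONE = 0
    DEFORMED_BOUNDARY = 1
    DAMAGED_LABEL = 2

# Map each inspection outcome to a collector bin, as in Peeters ¶[0122].
ROUTES = {Defect.NONE: "bin_3",
          Defect.DEFORMED_BOUNDARY: "bin_1",
          Defect.DAMAGED_LABEL: "bin_2"}

def handle_exception(defect, diverter) -> None:
    # The exception notification carries a control signal (e.g., an air
    # burst or a mechanically controlled flap) that redirects the item.
    diverter.route_to(ROUTES[defect])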
Regarding Claim 19, while the combination of Nechiporenko and Ikushima teaches the computing device of claim 11, it does not explicitly teach wherein the exception notification includes a control signal to an item handling apparatus to redirect the item.
Peeters teaches wherein the exception notification includes a control signal to an item handling apparatus to redirect the item (¶[0122], “In the event that the images of the sample indicate that the sample has a first type of defect, then the sample is routed to a first bin. In the event that the images of the sample indicate that the sample has a second type of defect, then the sample is routed into a second bin. Alternatively, in the event that the images of the sample indicate that the sample does not have any defects, then the sample is routed to a third bin. The sorting machine can route the samples using various different methods. A first method of routing includes using a burst of air to redirect the trajectory of a sample as it falls into the collector bin. A second method of routing includes using a mechanically controlled flap to redirect the trajectory of a sample as it falls into the collector bin.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Nechiporenko and Ikushima with Peeters in order to redirect the item when there is a defect. One skilled in the art would have been motivated to modify Nechiporenko and Ikushima in this manner in order to improve sorting compared to unreliable sorting by human hands without the cost of replacing an entire processing line (Peeters, ¶[0220]).
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Nechiporenko et al. (US PG-Pub 2012/0106787 A1) in view of Ikushima et al. (US PG-Pub 2014/0078498 A1), and further in view of Shtrom et al. (US PG-Pub 2019/0377814 A1).
Regarding Claim 10, while the combination of Nechiporenko and Ikushima teaches the method of claim 1, it does not explicitly teach wherein the reference attribute includes at least one of: a dimension of the item; a number of edges of the item detected from the image; and a keypoint descriptor extracted from the image.
Shtrom teaches wherein the reference attribute includes at least one of: a dimension of the item; a number of edges of the item detected from the image; and a keypoint descriptor extracted from the image (¶[0087], “wide variety of image-processing and classification techniques may be used to extract features from the optical image, such as one or more of: normalizing a magnification or a size of object 120, rotating object 122 to a predefined orientation, extracting the features that may be used to identify object 120, etc. Note that the extracted features may include: edges associated with one or more potential objects in the optical image, corners associated with the potential objects, lines associated with the potential objects, conic shapes associated with the potential objects, color regions within the optical image, and/or texture associated with the potential objects. In some embodiments, the features are extracted using a description technique, such as: scale invariant feature transform (SIFT), speed-up robust features (SURF), a binary descriptor (such as ORB), binary robust invariant scalable keypoints (BRISK), fast retinal keypoint (FREAK), etc. Furthermore, control engine 610 may apply one or more supervised or machine-learning techniques to the extracted features to identify/classify object 120.”; as disclosed in ¶[0087], the prior art is capable of extracting the dimension/size of an object, its edges, and the keypoints associated with an object when classifying the object).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Nechiporenko and Ikushima with Shtrom in order to extract dimensions, corner points, and keypoints of an object. One skilled in the art would have been motivated to modify Nechiporenko and Ikushima in this manner in order to improve the ability to resolve the structure of a three-dimensional object (Shtrom, ¶[0071]).
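Examiner's note (illustrative only): feature extraction of the kind Shtrom lists in ¶[0087] (SIFT, SURF, ORB, BRISK, FREAK, etc.) is shown in the following minimal sketch, an assumption for illustration using OpenCV's ORB detector rather than Shtrom's actual implementation; it derives the three recited attribute types from a single grayscale image.

import cv2

def extract_reference_attributes(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Dimension of the item: here approximated by the image extent.
    h, w = img.shape
    # Number of edges: external contours found on a Canny edge map.
    edges = cv2.Canny(img, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keypoint descriptors: ORB, a binary descriptor named in ¶[0087].
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(img, None)
    return {"dimensions": (w, h),
            "edge_count": len(contours),
            "descriptors": descriptors}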
Regarding Claim 20, while the combination of Nechiporenko and Ikushima teaches the computing device of claim 11, it does not explicitly teach wherein the reference attribute includes at least one of: a dimension of the item; a number of edges of the item detected from the image; and a keypoint descriptor extracted from the image.
Shtrom teaches wherein the reference attribute includes at least one of: a dimension of the item; a number of edges of the item detected from the image; and a keypoint descriptor extracted from the image (¶[0087], “wide variety of image-processing and classification techniques may be used to extract features from the optical image, such as one or more of: normalizing a magnification or a size of object 120, rotating object 122 to a predefined orientation, extracting the features that may be used to identify object 120, etc. Note that the extracted features may include: edges associated with one or more potential objects in the optical image, corners associated with the potential objects, lines associated with the potential objects, conic shapes associated with the potential objects, color regions within the optical image, and/or texture associated with the potential objects. In some embodiments, the features are extracted using a description technique, such as: scale invariant feature transform (SIFT), speed-up robust features (SURF), a binary descriptor (such as ORB), binary robust invariant scalable keypoints (BRISK), fast retinal keypoint (FREAK), etc. Furthermore, control engine 610 may apply one or more supervised or machine-learning techniques to the extracted features to identify/classify object 120.”; as disclosed in ¶[0087], the prior art is capable of extracting the dimension/size of an object, its edges, and the keypoints associated with an object when classifying the object).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Nechiporenko and Ikushima with Shtrom in order to extract dimensions, corner points, and keypoints of an object. One skilled in the art would have been motivated to modify Nechiporenko and Ikushima in this manner in order to improve the ability to resolve the structure of a three-dimensional object (Shtrom, ¶[0071]).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAN D HOANG whose telephone number is (571)272-4344. The examiner can normally be reached Monday-Friday 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN M VILLECCO can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAN HOANG/Examiner, Art Unit 2661