Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1-10, 14-16, 34 and 43 are objected to because of the following informalities:
In claims 1-10, 14-16, 34 and 43, remove the reference numerals in parentheses, i.e., (1), (10), (20), etc.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-3, 14-16, 32, 34 and 40-43 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sekii (US 20230009925 A1).
Regarding claim 1, Sekii teaches a method for processing an image (see para [0001]; “object detection methods and object detection devices that detect a defined object from an image”) comprising the steps: - spatially assigning an n-dimensional grid (1) to a distribution of objects (20) in the n-dimensional space, wherein - n is greater than or equal to 2 (see para [0044]; “The trained AI model 20 outputs object estimation data for each width (W)×height (H) grid cell into which an input image is divided … the input image divided into grid cells. In the example of FIG. 3, the input image is divided into an 8×6 grid of grid cells” Note: a 2D grid; n=2), - the distribution of the objects (20) is derivable from an object data set and the object data set is indicative for an arrangement of the objects (20) in an n-dimensional initial image (2) (see para [0080]; “a teacher signal are input, where the teacher signal is true values of position and size of object BB of the object to be detected in the training image along with position and size of four extreme point BB, and object class (one-hot class probability) of the object included in the object BB”, see also para [0081]; “Then training is advanced so that five errors are reduced for object estimation data for each grid cell detected by performing object detection on the input image” Note: teacher signals and per-cell object estimation data (object data set) indicate object arrangement in the initial image), - the grid (1) comprises grid cells (10) and each grid cell (10) is associated with a cell position (P10) in the grid (1) (see para [0047]; “The object BB information consists of position (on X axis and Y axis) relative to the grid cells, size (X axis and Y axis), and confidence. Position relative to the grid cells is information indicating estimated position of an object BB, and indicates an upper left coordinate of the object BB when an upper left coordinate of the corresponding grid cell is taken as the origin. 
Size is information indicating size of an object BB, and indicates a lower right coordinate of the object BB when the upper left coordinate of the object BB is the origin” Note: defines coordinates per grid cell and ties each cell to a cell position), - assigning objects (20) to grid cells (10) depending on the relative spatial arrangement between the objects (20) and the grid cells (10) (see para [0075]; “the trained AI model 20 outputs W×H×(25+8) values of object estimation data … The overlapping BB remover 30 classifies grid cells, removes object BB and extreme point BB of background grid cells (step S3), and also removes BB (object BB and extreme point BB) having a high degree of overlap with BB (object BB and extreme point BB) of grid cells having a higher confidence score”, see also para [0081]; “Then training is advanced so that five errors are reduced for object estimation data for each grid cell detected by performing object detection on the input image… The five errors are (1) error between detected position of the object BB and each extreme point BB of a grid cell where a center of the object BB of the teacher signal exists and position of a center of the object BB and each extreme point BB of the teacher signal” Note: assigning objects to the grid based on relative spatial layout (center in cell)).
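For illustration only (this sketch is not part of the record; the function name, grid dimensions, image size, and coordinates are hypothetical), the center-in-cell assignment scheme described in the cited passages can be expressed as:

```python
def assign_objects_to_cells(objects, image_w, image_h, grid_w=8, grid_h=6):
    """Map each object's center (x, y) to the (col, row) of its grid cell.

    Mirrors the YOLO-style scheme quoted from Sekii: the grid cell that
    contains the object's center is the cell the object is assigned to.
    """
    cell_w = image_w / grid_w
    cell_h = image_h / grid_h
    assignments = {}
    for obj_id, (cx, cy) in objects.items():
        # Floor the center into a cell index; clamp to the last cell so
        # points on the far edge stay inside the grid.
        col = min(int(cx // cell_w), grid_w - 1)
        row = min(int(cy // cell_h), grid_h - 1)
        assignments[obj_id] = (col, row)
    return assignments

# Hypothetical example: a 640x480 image divided into an 8x6 grid (80x80 cells).
objs = {"a": (100.0, 50.0), "b": (630.0, 470.0)}
print(assign_objects_to_cells(objs, 640, 480))  # {'a': (1, 0), 'b': (7, 5)}
```

Each object's center is floored into a cell index, so one forward pass of the assignment visits every object exactly once.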
Regarding claim 2, the rejection of claim 1 is incorporated herein.
Sekii further teaches wherein - the object data set is indicative for the positions (P20) of the objects (20) in the n-dimensional initial image (2) (see para [0080]; “a teacher signal are input, where the teacher signal is true values of position and size of object BB of the object to be detected in the training image along with position and size of four extreme point BB, and object class (one-hot class probability) of the object included in the object BB”) - the objects (20) are assigned to the grid cells (10) depending on the positions (P20) of the objects (20) relative to the cell positions (P10) (see para [0079]; “According to YOLO, an input image is divided into S×S grid cells, and B BB are output for each grid cell”, see also para [0045]; “the input image divided into grid cells. In the example of FIG. 3, the input image is divided into an 8×6 grid of grid cells” and para [0050]; “This is calculated for each of W×H grid cells, and therefore the object estimation data output by the trained AI model 20 is W×H×(25+C) data values”, para [0081]; “the five errors are (1) error between detected position of the object BB and each extreme point BB of a grid cell where a center of the object BB of the teacher signal exists and position of a center of the object BB and each extreme point BB of the teacher signal”).
Regarding claim 3, the rejection of claim 2 is incorporated herein.
Sekii further teaches wherein - objects (20) are assigned to grid cells (10) on a one-to-one basis (see para [0081]; “each extreme point BB of a grid cell where a center of the object BB of the teacher signal exists and position of a center of the object BB and each extreme point BB of the teacher signal”).
Regarding claim 14, the rejection of claim 1 is incorporated herein.
Sekii further teaches wherein - the object data set is indicative for one or more object features being characteristic for the objects (20) (see para [0039]; “A confidence score is calculated using confidence and class probability of object BB and extreme point BB included in the object estimation data”, see also para [0011]; “the object area with the extreme point area, thereby associating the feature point included in the extreme point area with an object feature point in the object area”).
Regarding claim 15, the rejection of claim 14 is incorporated herein.
Sekii further teaches further comprising - producing a grid data set by assigning one or more object features of the objects (20) to the grid cells (10) to which the objects (20) are assigned, - wherein the grid data set is indicative for the cell positions (P10) of all grid cells (10) of the grid (1) and indicative for which object features are assigned to which grid cell (10) (see para [0050]; “C values of class probability information for (5×5+C) values of object estimation data for each grid cell. This is calculated for each of W×H grid cells, and therefore the object estimation data output by the trained AI model 20 is W×H×(25+C) data values (third order tensor)”).
Regarding claim 16, the rejection of claim 15 is incorporated herein.
Sekii further teaches further comprising - producing an n-dimensional output image (3) depending on the grid data set, wherein the output image (3) is indicative for the initial image (2) and shows one or more object features at the respective cell positions (P10) (see para [0038]; “outputs object estimation data by evaluating an entire image once from an input image of defined size. The object estimation data includes data such as a BB (object (area) BB) that surrounds an object to be detected on an input image”, see also para [0077]; “the association unit 40 associates remaining object BB with extreme point BB (step S5), shapes the object BB based on positions of associated extreme point BB (step S6), and outputs the object BB after shaping and associated extreme point BB as an object detection result (step S7)”).
Regarding claim 32, the rejection of claim 15 is incorporated herein.
Sekii further teaches a grid data set produced with the method (see para [0050]; “the trained AI model 20 outputs five values for each BB information (object BB information, first extreme point BB information, second extreme point BB information, third extreme point BB information, and fourth extreme point BB information) and C values of class probability information for (5×5+C) values of object estimation data for each grid cell. This is calculated for each of W×H grid cells, and therefore the object estimation data output by the trained AI model 20 is W×H×(25+C) data values”).
Regarding claim 34, the rejection of claim 16 is incorporated herein.
Sekii further teaches an output image (3) produced with the method (see para [0050]; “This is calculated for each of W×H grid cells, and therefore the object estimation data output by the trained AI model 20 is W×H×(25+C) data values”).
Regarding claim 40, the rejection of claim 1 is incorporated herein.
Sekii further teaches a device comprising means for carrying out the method (see para [0036]; “FIG. 1 is a block diagram illustrating structure of the object detection device 1. As illustrated, the object detection device 1 includes a camera 10, a trained artificial intelligence (AI) model 20, an overlapping BB remover 30, an association unit 40, and an object detection result storage 50”).
Regarding claim 41, the rejection of claim 1 is incorporated herein.
Sekii further teaches a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method (see para [0042]; “a computer system comprising a microprocessor, read-only memory (ROM), random access memory (RAM), hard disk drive (HDD), and the like. A computer program loaded from the ROM or HDD is stored in the RAM, and the microprocessor realizes functions of the processing unit by operating according to the computer program on the RAM”).
Regarding claim 42, the rejection of claim 1 is incorporated herein.
Sekii further teaches a computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the method (see para [0042]; “a computer system comprising a microprocessor, read-only memory (ROM), random access memory (RAM), hard disk drive (HDD), and the like. A computer program loaded from the ROM or HDD is stored in the RAM, and the microprocessor realizes functions of the processing unit by operating according to the computer program on the RAM. Here, a computer program is configured by combining instruction codes indicating commands to a computer in order to achieve a defined function. The object detection result storage 50 is realized by a storage such as an HDD”).
Regarding claim 43, the rejection of claim 32 is incorporated herein.
Sekii further teaches a neural network trained with the grid data set and/or the output image (3) (see para [0044]; “the trained AI model 20 is a convolutional neural network that has undergone machine learning to detect an object such as a person, a dog, or a cow as an object class to be detected. The trained AI model 20 outputs object estimation data for each width (W)×height (H) grid cell into which an input image is divided”, see also para [0081]; “Then training is advanced so that five errors are reduced for object estimation data for each grid cell detected by performing object detection on the input image (parameters of the convolutional neural network are determined)”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Sekii in view of Pope (US 20150029186 A1).
Regarding claim 4, the rejection of claim 1 is incorporated herein.
Sekii further teaches wherein, for assigning the objects (20) to grid cells (10), - a first assignment procedure is executed in which objects (20) are assigned to the respective closest grid cell (10) (see para [0047]; “Position relative to the grid cells is information indicating estimated position of an object BB, and indicates an upper left coordinate of the object BB when an upper left coordinate of the corresponding grid cell is taken as the origin” Note: cell-relative output for each cell implies a positional association to a specific cell, i.e., the closest). However, Sekii does not teach: and, - if, in the first assignment procedure, two or more objects (20) are to be assigned to the same grid cell (10), these objects (20) constitute conflicting objects (20c), the respective grid cell (10) constitutes a conflict grid cell (10c) and a conflict resolution procedure is executed in order to assign the conflicting objects (20c) to different grid cells (10) and/or to decide to not assign at least one of the conflicting objects (20c) to any grid cell (10).
In the same field of endeavor, Pope teaches: and, - if, in the first assignment procedure, two or more objects (20) are to be assigned to the same grid cell (10), these objects (20) constitute conflicting objects (20c), the respective grid cell (10) constitutes a conflict grid cell (10c) and a conflict resolution procedure is executed in order to assign the conflicting objects (20c) to different grid cells (10) and/or to decide to not assign at least one of the conflicting objects (20c) to any grid cell (10) (see para [0033]; “With reference to FIG. 2B, occupancy grid (204) can be utilized to determine if the location to which the data point is to be mapped is occupied by another data point. To designate occupancy and vacancy, available grid cells can be set to 0 and occupied grid cells can be set to 1. Referring again to FIG. 3, if the location to which the data point is to be mapped is occupied by another data point, at (308) the data point is discarded”, see also para [0034]; “Turning back to FIG. 3, in such cases in which the number of most recent points processed is equal to a predetermined number of points, at (312) an oldest point is removed from the number of recent points processed and its associated data point is no longer mapped to a location. When an oldest point is removed, that point's grid cell is marked as available. At (314), the data point to be mapped is mapped to the location determined at (304) and the point is added to queue of most recent points processed”).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sekii's method of detecting a defined object from an image, which estimates, on the image, an extreme point area including a boundary feature point that satisfies a criterion related to a boundary of the object, in view of Pope's method of reducing a point cloud data set by mapping points to locations and determining whether a different point's coordinates are already mapped to the same location, in order to resolve same-cell conflicts by dropping duplicate assignments (see para [0033]).
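For illustration only (not part of the record; all names and values are hypothetical), Pope's occupancy-grid conflict handling, in which a cell is marked occupied on first mapping and later points landing in an occupied cell are discarded, can be sketched as:

```python
def map_with_occupancy(points, grid_w, grid_h, cell_size):
    """Map points to grid cells, discarding any point whose target cell
    is already occupied (the 'discard duplicate assignment' behavior
    quoted from Pope para [0033])."""
    occupied = set()
    kept, discarded = [], []
    for (x, y) in points:
        cell = (min(int(x // cell_size), grid_w - 1),
                min(int(y // cell_size), grid_h - 1))
        if cell in occupied:
            discarded.append((x, y))  # conflict: cell already holds a point
        else:
            occupied.add(cell)
            kept.append((x, y))
    return kept, discarded

# Hypothetical example: the first two points fall in the same 10x10 cell.
kept, dropped = map_with_occupancy([(5, 5), (7, 6), (25, 5)], 4, 4, 10)
print(kept, dropped)  # [(5, 5), (25, 5)] [(7, 6)]
```

The set of occupied cells plays the role of Pope's 0/1 occupancy grid; a full implementation could also age out the oldest point as in para [0034].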
Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Sekii in view of Hoch et al. (US 20170300173 A1).
Regarding claim 5, the rejection of claim 1 is incorporated herein.
Sekii does not teach wherein - at least some objects (20) are assigned to grid cells (10) using the Hungarian algorithm, - an assignment matrix C is used with the values of the elements cij of the matrix C depending on the distance between the i-th object (20) to the j-th grid cell (10).
In the same field of endeavor, Hoch et al. teach wherein - at least some objects (20) are assigned to grid cells (10) using the Hungarian algorithm, - an assignment matrix C is used with the values of the elements cij of the matrix C depending on the distance between the i-th object (20) and the j-th grid cell (10) (see para [0026]; “then determining the optimum assignment based on those distances. The assignment of measurements from the cost matrix may be made to individual objects using, for example, the Hungarian (Kuhn-Munkres) algorithm”, see also para [0006]; “a cost matrix is made up of the individual ‘costs’ for the assignment of each measured position to each object. The optimum assignment then can be determined by means of, e.g., the Hungarian (Kuhn-Munkres) algorithm”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sekii's method of detecting a defined object from an image, which estimates, on the image, an extreme point area including a boundary feature point that satisfies a criterion related to a boundary of the object, in view of Hoch et al.'s position tracking system mapping a location of an object to a measurement using movement models, in order to solve the combinatorial assignment problem of measured positions to objects (see para [0026]).
Regarding claim 6, the rejection of claim 5 is incorporated herein.
Hoch et al. in the combination further teach wherein - the values of the elements cij are proportional to the squared distance between the i-th object (20) and the j-th grid cell (10), - the distance between the i-th object (20) and the j-th grid cell (10) is the Euclidean distance or the Manhattan distance (see para [0008]; “The cost for each assignment may be a simple metric like the Manhattan distance or the (squared) Euclidean distance between the newly measured touch positions and the recent (or predicted) positions of the contacts”).
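For illustration only (not part of the record; names and values are hypothetical), the cost-matrix assignment described by Hoch et al. can be sketched with a brute-force search that returns the same optimum the Hungarian (Kuhn-Munkres) algorithm would find; a practical implementation would use a Hungarian solver for O(n^3) runtime rather than enumerating permutations:

```python
from itertools import permutations

def squared_euclidean(p, q):
    # cost element c_ij: squared Euclidean distance, as in Hoch para [0008]
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def optimal_assignment(objects, cells):
    """Brute-force the minimum-cost one-to-one assignment of objects to
    cells over the cost matrix c_ij. For the small conflict sets discussed
    here this yields the same result as the Hungarian algorithm."""
    cost = [[squared_euclidean(o, c) for c in cells] for o in objects]
    best, best_perm = float("inf"), None
    for perm in permutations(range(len(cells)), len(objects)):
        total = sum(cost[i][j] for i, j in enumerate(perm))
        if total < best:
            best, best_perm = total, perm
    return list(enumerate(best_perm)), best

# Hypothetical example: two object centers, two candidate cell centers.
pairs, total_cost = optimal_assignment(
    [(0.9, 0.1), (1.1, 0.2)], [(1.0, 0.0), (2.0, 0.0)])
print(pairs)  # [(0, 0), (1, 1)]
```

Swapping `squared_euclidean` for a Manhattan metric changes only the cost function, matching the alternative named in para [0008].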
Claims 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Sekii in view of Pope as applied to claims 1 and 4 above, and further in view of Hoch et al.
Regarding claim 7, the rejection of claim 4 is incorporated herein.
Pope in the combination further teaches a cell-set comprising the conflict grid cell (10c) and one or more selected grid cells (10) in the neighborhood of the conflict grid cell (10c) (see para [0033]; “With reference to FIG. 2B, occupancy grid (204) can be utilized to determine if the location to which the data point is to be mapped is occupied by another data point. To designate occupancy and vacancy, available grid cells can be set to 0 and occupied grid cells can be set to 1. Referring again to FIG. 3, if the location to which the data point is to be mapped is occupied by another data point, at (308) the data point is discarded”).
In the same field of endeavor, Hoch et al. teach wherein - the conflict resolution procedure comprises the execution of the Hungarian algorithm in order to assign the conflicting objects (20c) to a cell-set (see para [0006]; “a cost matrix is made up of the individual ‘costs’ for the assignment of each measured position to each object. The optimum assignment then can be determined by means of, e.g., the Hungarian (Kuhn-Munkres) algorithm”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sekii's method of detecting a defined object from an image, which estimates, on the image, an extreme point area including a boundary feature point that satisfies a criterion related to a boundary of the object, in view of Hoch et al.'s position tracking system mapping a location of an object to a measurement using movement models, in order to reduce complexity and resolve conflicts locally (see para [0006]).
Regarding claim 8, the rejection of claim 7 is incorporated herein.
Sekii in the combination further teaches wherein, when executing the conflict resolution process, - an object-set is determined comprising objects (20) which would have to be assigned to the grid cells (10) of the cell-set when executing the first assignment procedure (see para [0054]; “The overlapping BB remover 30 classifies each grid cell based on object estimation data output by the trained AI model 20 … determines that a grid cell having a confidence score less than or equal to a defined threshold value (for example, 0.6) is a background grid cell that does not include an object”, see also para [0056]-[0058]; “the overlapping BB remover 30 removes object BB and each extreme point BB of grid cells determined to be background … removes an object BB having a high degree of overlap with an object BB of a grid cell having a higher confidence score”).
Hoch et al. in the combination further teach - afterwards, only the objects (20) of the object-set are assigned to the grid cells (10) of the cell-set by using the Hungarian algorithm (see para [0006]; “a cost matrix is made up of the individual ‘costs’ for the assignment of each measured position to each object. The optimum assignment then can be determined by means of, e.g., the Hungarian (Kuhn-Munkres) algorithm”).
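For illustration only (not part of the record; identifiers and coordinates are hypothetical), the object-set described above, i.e., the objects whose first-procedure target cell lies inside the selected cell-set, can be gathered as:

```python
def build_object_set(first_assignments, cell_set):
    """Collect the objects whose first-procedure target cell lies in the
    selected cell-set; these are the candidates for local reassignment
    by the Hungarian algorithm."""
    return [obj for obj, cell in first_assignments.items() if cell in cell_set]

# Hypothetical example: objects 'a' and 'b' conflict at cell (3, 3); the
# cell-set is that cell plus its four direct neighbors.
first = {"a": (3, 3), "b": (3, 3), "c": (4, 3), "d": (0, 0)}
cell_set = {(3, 3), (2, 3), (4, 3), (3, 2), (3, 4)}
print(build_object_set(first, cell_set))  # ['a', 'b', 'c']
```

Only this object-set and cell-set would then be fed to the assignment solver, which is what keeps the conflict resolution local and cheap.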
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Sekii and Pope in view of Hoch et al. as applied to claims 1, 4, 7 and 8 above, and further in view of Emadi et al. (US 20210199792 A1).
Regarding claim 9, the rejection of claim 8 is incorporated herein.
Sekii in the combination further teaches wherein - if the number k of objects (20) in the object-set is larger than the number m of grid cells (10) in the cell-set (see para [0054]; “The overlapping BB remover 30 classifies each grid cell based on object estimation data output by the trained AI model 20. The overlapping BB remover 30 calculates a confidence score for each grid cell, and determines that a grid cell having a confidence score less than or equal to a defined threshold value (for example, 0.6) is a background grid cell that does not include an object. The overlapping BB remover 30 determines that grid cells other than background grid cells are grid cells of an object class having a highest-class probability” Note: multiple BBs can remain around nearby cells, creating contention). However, the combination of Sekii, Pope and Hoch et al. as a whole does not teach the neighborhood of the conflict grid cell (10c) is increased in order to increase the amount of selected grid cells (10) until m is greater than or equal to k or until m reaches a predetermined maximum value mmax.
In the same field of endeavor, Emadi et al. teach the neighborhood of the conflict grid cell (10c) is increased in order to increase the amount of selected grid cells (10) (see para [0005]; “In response to determining that the tracking initialization condition is not satisfied, a second gating size is adaptively determined based at least in part on the first movement parameter. Additional sensor data may be captured by the sensor unit at a second time using the second gating size”, see also para [0058]; “Then at step 515, an adaptive gating size is computed based on the displacement and Doppler change, e.g., according to Eq. (1). At step 517, the radar unit may continue obtaining sensor data with the adaptive gating size”) until m is greater than or equal to k or until m reaches a predetermined maximum value mmax (see para [0061]; “At step 522, the radar unit may determine whether a termination condition is met. For example, the termination module 314 shown in FIG. 4B may determine whether any of the termination conditions is met. At step 524, when the termination condition is met, e.g., the radar unit has missed observations of the target point for at least a minimum number of consecutive scans, and the radar unit has produced a number of scans during the tracking stage, method 500 proceeds to step 526 to terminate the track”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sekii's method of detecting a defined object from an image, which estimates, on the image, an extreme point area including a boundary feature point that satisfies a criterion related to a boundary of the object, in view of the use of a position tracking system mapping a location of an object to a measurement using movement models of Hoch et al. 
and an adaptive gating mechanism for radar tracking initialization that determines whether the Doppler and displacement parameters satisfy an initialization constraint of Emadi et al., in order to provide predictable control logic ensuring a feasible one-to-one assignment (see para [0061]).
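For illustration only (not part of the record; the function name and parameters are hypothetical), the claimed neighborhood growth, expanding the selected cells around the conflict grid cell until m is greater than or equal to k or m reaches mmax, can be sketched as:

```python
def select_cell_set(conflict_cell, k, grid_w, grid_h, m_max):
    """Grow a square neighborhood around the conflict cell until it holds
    at least k cells (m >= k) or the cell count reaches the cap m_max.

    The square may slightly overshoot m_max at the terminating radius;
    this is a sketch of the stopping criterion, not a tuned implementation.
    """
    cx, cy = conflict_cell
    radius = 0
    while True:
        cells = {(x, y)
                 for x in range(max(0, cx - radius), min(grid_w, cx + radius + 1))
                 for y in range(max(0, cy - radius), min(grid_h, cy + radius + 1))}
        if len(cells) >= k or len(cells) >= m_max:
            return cells
        radius += 1

# Hypothetical example: 5 conflicting objects around cell (4, 3) on an
# 8x6 grid; one expansion step yields the 3x3 neighborhood (9 cells >= 5).
cells = select_cell_set((4, 3), k=5, grid_w=8, grid_h=6, m_max=49)
print(len(cells))  # 9
```

Clamping the ranges to the grid borders keeps the cell-set valid for conflict cells near an edge, where a full square neighborhood is not available.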
Regarding claim 10, the rejection of claim 9 is incorporated herein.
Sekii in the combination further teaches wherein - mmax is smaller than the total number of grid cells (10) in the grid (1), - mmax is at most 100 or at most 49 (see para [0051]; “FIG. 5 is an example diagram illustrating position of object BB of each grid cell in object estimation data output from an input image. As illustrated, W×H (8×6 in this example) object BB are output. Similarly, for each extreme point BB, W×H are output” Note: it was well known to use 7×7 = 49 grid cells in one-shot detectors; 8×6 = 48 cells is essentially the same order of magnitude as 49).
Claim 44 is rejected under 35 U.S.C. 103 as being unpatentable over Sekii in view of Yu (US 20190147592 A1).
Regarding claim 44, the rejection of claim 32 is incorporated herein. Sekii does not teach a method for diagnosing a disease in a patient comprising the step of: comparing a grid data set and/or an output image (3) of a biological sample of a patient with at least one reference grid data set and/or at least one reference output image (3) of a biological sample of a reference subject wherein this comparison allows diagnosing a disease in the patient.
In the same field of endeavor, Yu teaches a method for diagnosing a disease in a patient comprising the step of: comparing a grid data set and/or an output image (3) of a biological sample of a patient with at least one reference grid data set and/or at least one reference output image (3) of a biological sample of a reference subject wherein this comparison allows diagnosing a disease in the patient (see claim 15; “the digital image comprising an m×n array of pixels, where m and n are non-zero integers; electronically partitioning the digital image into a grid of identically sized, overlapping subimages thereof, each of the subimages comprising an o×p array of pixels that includes pixels from a subimage adjacent thereto along a single overlap dimension, where o and p are non-zero integers and o×p/m×n≤0.01; computing an average pixel density for each of the subimages; selecting q subimages with densities higher than the densities of the other subimages, where q is a non-zero integer equal to or greater than 10; using a neural network to classify the selected subimages among pathology states; and aggregating the subimage classifications and classifying tissue among a plurality of types within the pathology states”, see also para [0009]; “[t]he binary outcome classes may be one or more of (i) benign vs. malignant or (ii) lung adenocarcinoma vs. lung squamous cell carcinoma” Note: comparing a patient sample’s grid-based representation against learned/reference distributions to determine benign vs. malignant or subtype (a diagnostic determination)). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sekii's method of detecting a defined object from an image, which estimates, on the image, an extreme point area including a boundary feature point that satisfies a criterion related to a boundary of the object, in view of Yu's use of histological images preprocessed and classified among pathology states using a neural network, in order to yield predictable results when classifying pathology slides from grid-tiled inputs (see claim 15).
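For illustration only (not part of the record; tile size, stride, and image values are hypothetical), the partition/density/selection steps quoted from Yu's claim 15 can be sketched as:

```python
def top_density_tiles(image, tile, stride, q):
    """Partition a 2D grayscale image into (possibly overlapping) tiles,
    compute each tile's average pixel density, and keep the q densest,
    mirroring the partition/density/selection steps quoted from claim 15."""
    h, w = len(image), len(image[0])
    tiles = []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            total = sum(image[y + dy][x + dx]
                        for dy in range(tile) for dx in range(tile))
            tiles.append((total / (tile * tile), (x, y)))
    tiles.sort(reverse=True)  # densest tiles first
    return [pos for _, pos in tiles[:q]]

# Hypothetical 4x4 image, 2x2 tiles, stride 2 (non-overlapping for brevity;
# a stride smaller than the tile size would give the claimed overlap).
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [1, 1, 0, 0],
       [1, 1, 0, 0]]
print(top_density_tiles(img, 2, 2, 2))  # [(2, 0), (0, 2)]
```

The selected tile positions would then be handed to a classifier; the aggregation of per-tile classifications into a slide-level call is outside this sketch.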
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINTA GEBRESLASSIE whose telephone number is (571)272-3475. The examiner can normally be reached Monday-Friday, 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at 571-270-5180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WINTA GEBRESLASSIE/Examiner, Art Unit 2677