DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. JP-2022-181886, filed on 12/06/2023.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/30/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Status
Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claims 1-2, 4, and 9-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ito et al. (U.S. 20100034464 A1; Ito).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Ito et al. (U.S. 20100034464 A1; Ito), in view of Iwamoto et al. (U.S. 20110135203 A1; Iwamoto).
Claims 5 and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Ito et al. (U.S. 20100034464 A1; Ito), in view of Oami Ryoma (WO-2020217368 A1).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Ito et al. (U.S. 20100034464 A1; Ito), in view of Oami Ryoma (WO-2020217368 A1), and further in view of Iwamoto et al. (U.S. 20110135203 A1; Iwamoto).
Examiner’s Note: Oami Ryoma (WO-2020217368 A1) – see the PDF document provided by the Examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. When reviewing independent claim 1, and based upon consideration of all of the relevant factors with respect to the claim as a whole, claims 1-10 are held to claim an abstract idea without reciting elements that amount to significantly more than the abstract idea, and are therefore rejected as ineligible subject matter under 35 U.S.C. 101. The Examiner analyzes claim 1 below; a similar rationale applies to independent claims 9 and 10. The rationale for this finding, under MPEP § 2106, is explained below:
The claimed invention (1) must be directed to one of the four statutory categories, and (2) must not be wholly directed to subject matter encompassing a judicially recognized exception, as defined below. The following two-step analysis is used to evaluate these criteria.
Step 1: Is the claim directed to one of the four patent-eligible subject matter categories: process, machine, manufacture, or composition of matter?
When examining the claims under 35 U.S.C. 101, the Examiner interprets that the claims fall within the statutory categories: claim 1 is directed to a machine (a feature extraction apparatus), claim 9 is directed to a process (a method), and claim 10 is directed to a manufacture (a non-transitory computer readable medium).
Step 2a, Prong 1: Does the claim wholly embrace a judicially recognized exception, which includes laws of nature, physical phenomena, and abstract ideas, or is it a particular practical application of a judicial exception?
The Examiner interprets that claim 1 recites a judicial exception in the limitations of “receiving an object image and extracting at least one feature of an object included in the received object image; and determining a value of M (the value of M is an integer equal to or larger than one but equal to or smaller than N), which is the number of object images whose features will be extracted of N (the value of N is an integer equal to or larger than two) object images with which a first tracking ID, which is an identifier allocated to one object, is associated, in accordance with a usage status of processing resources of the feature extraction apparatus.” These limitations are directed to a mental process, akin to a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016), and/or to performing a mental process in a computer environment. An example of a case identifying a mental process performed in a computer environment as an abstract idea is Symantec Corp., 838 F.3d at 1316-18, 120 USPQ2d at 1360. If the claim recites a judicial exception (i.e., an abstract idea enumerated in MPEP § 2106.04(a)(2), a law of nature, or a natural phenomenon), the claim requires further analysis in Prong Two.
Step 2a, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?
The Examiner interprets that claim 1 does not recite additional elements, or a combination of additional elements, that integrate the judicial exception into a practical application, since the claimed steps are merely performed by a processor (see MPEP § 2106.05(g)) and merely generally link the use of the judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). See MPEP § 2106.04(a). Because a judicial exception is not eligible subject matter, Bilski, 561 U.S. at 601, 95 USPQ2d at 1005-06 (quoting Chakrabarty, 447 U.S. at 309, 206 USPQ at 197 (1980)), if there are no additional claim elements besides the judicial exception, or if the additional claim elements merely recite another judicial exception, that is insufficient to integrate the judicial exception into a practical application. See, e.g., RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327, 122 USPQ2d 1377 (Fed. Cir. 2017) ("Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract"). If there are no additional elements in the claim, then it cannot be eligible. In such a case, after making the appropriate rejection (see MPEP § 2106.07 for more information on formulating a rejection for lack of eligibility), it is a best practice for the examiner to recommend an amendment, if possible, that would resolve eligibility of the claim.
Step 2b: If the claim does not integrate the judicial exception into a practical application, the Examiner must determine whether the claim recites additional elements that amount to significantly more than the judicial exception.
The Examiner interprets that the claims do not amount to significantly more than the judicial exception, since they merely add insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea, such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)).
Furthermore, the generic computer components (the processor) are recited as performing generic computer functions that are well-understood, routine, and conventional activities, which amounts to no more than implementing the abstract idea with a computerized system.
Claims 2-8 depend from independent claim 1 and include all the limitations of the independent claim. The Examiner finds that claims 2-8 do not recite significantly more, since these claims only recite obtaining analysis data and outputting the data information.
Thus, claims 1-10 recite the same abstract idea and are therefore not drawn to eligible subject matter, as they are directed to the abstract idea without significantly more.
Therefore, claims 1-10 are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 4, and 9-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ito et al. (U.S. 20100034464 A1; Ito).
Regarding claim 1, Ito discloses A feature extraction apparatus (Paragraph 11: “ FIG. 1 shows a block diagram of an image processing apparatus 100”) comprising: at least one memory (Fig.1 : storage unit 150) configured to store instructions; and at least one processor (Fig.1: control unit 160) configured to execute, according to the instructions, (Paragraph 12: “the control unit 160 controls each unit of the image processing apparatus 100.”) a process comprising:
receiving an object image (Paragraph 17: “In step S310, the control unit 160 stores image sequence acquired by the acquisition unit 110 in the storage unit 150”) and extracting at least one feature of an object included in an object image; (Paragraph 19: “In step S310, the control unit 160 stores image sequence acquired by the acquisition unit 110 in the storage unit 150. … In step S330, the object detection unit 120 detects object using N features extracted by the N feature extraction units 151 (g.sub.1, g.sub.2, . . . , g.sub.N) stored in the storage unit 150.”) and
determining a value of M (the value of M is an integer equal to or larger than one but equal to or smaller than N), which is the number of object images whose features will be extracted of N (the value of N is an integer equal to or larger than two) object images (Figs. 1-3; Paragraph 11: “The feature selection unit 130 selects M (M is a positive integer smaller than N) feature extraction units from N feature extraction units”) Paragraph 30: “In step S350, the feature selection unit 130 selects M feature extraction units from N feature extraction units such that degree in separation of the confidence value c.sub.D which represents object-likelihood between the object and background thereof becomes larger, in order to adapt to change of appearances of the object and background thereof.”) with which a first tracking ID (the object tracking unit 140), which is an identifier allocated to one object, is associated, in accordance with a usage status of processing resources of the feature extraction apparatus. (Paragraph 23: “ In step S340, the object tracking unit 140 tracks object using M features extracted by M feature extraction units selected by the feature selection unit 130.”; Paragraph 11: “The object tracking unit 140 tracks the object using the M features extracted from the selected M (M is a positive integer smaller than N) feature extraction units.”)
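For purposes of illustration only, the following Python sketch models the kind of resource-dependent M-of-N selection recited in claim 1 and mapped above to Ito’s Paragraphs 11 and 30. It is a minimal sketch: the names (ObjectImage, determine_m, select_for_extraction), the CPU-utilization heuristic, and the quality-based ranking are assumptions introduced for discussion and are not taken from the applicant’s disclosure or from Ito.

# Hypothetical illustration of an M-of-N selection driven by resource usage.
# Nothing here is taken from the applicant's specification or from Ito;
# names, thresholds, and the scoring heuristic are invented for discussion.

from dataclasses import dataclass
from typing import List


@dataclass
class ObjectImage:
    image_id: int
    tracking_id: int      # identifier allocated to one tracked object
    quality: float        # e.g., detection confidence or sharpness score


def determine_m(n: int, cpu_utilization: float) -> int:
    """Pick M (1 <= M <= N) from the current resource usage.

    A simple linear heuristic: the busier the processor, the fewer of the
    N object images are sent on to feature extraction.
    """
    headroom = max(0.0, 1.0 - cpu_utilization)   # fraction of idle capacity
    return max(1, min(n, round(n * headroom)))   # clamp into [1, N]


def select_for_extraction(images: List[ObjectImage], cpu_utilization: float):
    """Split the N images of one tracking ID into M to extract and N-M to skip."""
    n = len(images)
    m = determine_m(n, cpu_utilization)
    ranked = sorted(images, key=lambda im: im.quality, reverse=True)
    return ranked[:m], ranked[m:]                 # (extract, skip)


if __name__ == "__main__":
    track = [ObjectImage(i, tracking_id=7, quality=q)
             for i, q in enumerate([0.9, 0.4, 0.75, 0.6])]
    extract, skip = select_for_extraction(track, cpu_utilization=0.5)
    print("extract:", [im.image_id for im in extract])
    print("skip:   ", [im.image_id for im in skip])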
Regarding claim 2, Ito discloses wherein the process further comprises: outputting object information including received information; and receiving the N object images and outputting, of the N object images that have been received, (Paragraphs 19-20: “In step S330, the object detection unit 120 detects object using N features extracted by the N feature extraction units 151 (g.sub.1, g.sub.2, . . . , g.sub.N) stored in the storage unit 150 … Function f.sub.D is, for example, a classifier which separates pre-learned object for generating N feature extraction units from background thereof.”) the M object images whose features will be extracted to the process of the extracting, and outputting, of the N object images, (N-M) object images whose features will not be extracted other than the M object images whose features will be extracted, to the process of the outputting of object information. (Paragraph 30: “In step S350, the feature selection unit 130 selects M feature extraction units from N feature extraction units such that degree in separation of the confidence value c.sub.D which represents object-likelihood between the object and background thereof becomes larger, in order to adapt to change of appearances of the object and background thereof. The output of the unselected N-M feature extraction units is treated as 0 in the calculation of c.sub.”)
Regarding claim 4, Ito discloses the outputting of object information includes outputting the object information including an object image ID of each of the M object images and extracted information indicating that features of each of the M object images whose features will be extracted have been extracted (Paragraph 23: “In step S340, the object tracking unit 140 tracks object using M features extracted by M feature extraction units selected by the feature selection unit 130.”) and an object image ID of the (N- M) object images and unextracted information indicating features of each of the (N-M) object images whose features will not be extracted have not been extracted. (Paragraph 30: “In step S350, the feature selection unit 130 selects M feature extraction units from N feature extraction units such that degree in separation of the confidence value c.sub.D which represents object-likelihood between the object and background thereof becomes larger, in order to adapt to change of appearances of the object and background thereof. The output of the unselected N-M feature extraction units is treated as 0 in the calculation of c.sub.”)
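As a further illustration of the output discussed for claims 2 and 4, the short sketch below (hypothetical field names; not drawn from the applicant’s disclosure or from Ito) shows object information in which each of the N object images carries its object image ID together with a flag indicating whether its features were extracted (the M images) or not (the remaining N-M images).

# Hypothetical sketch of the "object information" output discussed for
# claims 2 and 4. Field names are invented for illustration.

from typing import Dict, Iterable, List


def build_object_information(extracted_ids: Iterable[int],
                             skipped_ids: Iterable[int]) -> List[Dict]:
    records = []
    for image_id in extracted_ids:
        records.append({"object_image_id": image_id, "features_extracted": True})
    for image_id in skipped_ids:
        records.append({"object_image_id": image_id, "features_extracted": False})
    return records


if __name__ == "__main__":
    # Continuing the earlier example: images 0 and 2 were extracted, 3 and 1 skipped.
    for rec in build_object_information([0, 2], [3, 1]):
        print(rec)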
Regarding claim 9, Ito discloses A method executed by a feature extraction apparatus (Paragraph 2: “an apparatus and a method which may speed up tracking of an object and improve robustness.”), the method comprising:
extracting at least one feature of an object included in an object image; (Paragraphs 17-19: “In step S310, the control unit 160 stores image sequence acquired by the acquisition unit 110 in the storage unit 150. … In step S330, the object detection unit 120 detects object using N features extracted by the N feature extraction units 151 (g.sub.1, g.sub.2, . . . , g.sub.N) stored in the storage unit 150.”) and
determining a value of M (the value of M is an integer equal to or larger than one but equal to or smaller than N), which is the number of object images whose features will be extracted of N (the value of N is an integer equal to or larger than two) object images (Figs. 1-3; Paragraph 11: “The feature selection unit 130 selects M (M is a positive integer smaller than N) feature extraction units from N feature extraction units”) Paragraph 30: “In step S350, the feature selection unit 130 selects M feature extraction units from N feature extraction units such that degree in separation of the confidence value c.sub.D which represents object-likelihood between the object and background thereof becomes larger, in order to adapt to change of appearances of the object and background thereof.”) with which a first tracking ID (the object tracking unit 140), which is an identifier allocated to one object, is associated, in accordance with a usage status of processing resources of the feature extraction apparatus. (Paragraph 23: “ In step S340, the object tracking unit 140 tracks object using M features extracted by M feature extraction units selected by the feature selection unit 130.”; Paragraph 11: “The object tracking unit 140 tracks the object using the M features extracted from the selected M (M is a positive integer smaller than N) feature extraction units.”)
Regarding claim 10, Ito discloses A non-transitory computer readable medium storing a program (Fig.1 : storage unit 150) for causing a feature extraction apparatus (Paragraph 11: “ FIG. 1 shows a block diagram of an image processing apparatus 100”) to execute processing (Fig.1: control unit 160) (Paragraph 12: “the control unit 160 controls each unit of the image processing apparatus 100.”) comprising:
extracting at least one feature of an object included in an object image; (Paragraphs 17-19: “In step S310, the control unit 160 stores image sequence acquired by the acquisition unit 110 in the storage unit 150. … In step S330, the object detection unit 120 detects object using N features extracted by the N feature extraction units 151 (g.sub.1, g.sub.2, . . . , g.sub.N) stored in the storage unit 150.”) and
determining a value of M (the value of M is an integer equal to or larger than one but equal to or smaller than N), which is the number of object images whose features will be extracted of N (the value of N is an integer equal to or larger than two) object images (Figs. 1-3; Paragraph 11: “The feature selection unit 130 selects M (M is a positive integer smaller than N) feature extraction units from N feature extraction units”) ; Paragraph 30: “In step S350, the feature selection unit 130 selects M feature extraction units from N feature extraction units such that degree in separation of the confidence value c.sub.D which represents object-likelihood between the object and background thereof becomes larger, in order to adapt to change of appearances of the object and background thereof.”) with which a first tracking ID (the object tracking unit 140), which is an identifier allocated to one object, is associated, in accordance with a usage status of processing resources of the feature extraction apparatus. (Paragraph 23: “ In step S340, the object tracking unit 140 tracks object using M features extracted by M feature extraction units selected by the feature selection unit 130.”; Paragraph 11: “The object tracking unit 140 tracks the object using the M features extracted from the selected M (M is a positive integer smaller than N) feature extraction units.”)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Ito et al. (U.S. 20100034464 A1; Ito), in view of Iwamoto et al. (U.S. 20110135203 A1; Iwamoto).
Regarding claim 3, Ito discloses the claimed invention except wherein the process further comprises selecting, from P (the value of P is an integer equal to or larger than N) object images with which the first tracking ID is associated, the N object images based on P priorities associated with the respective P object images and outputting the N selected object images to the process of the sorting, and outputting (P-N) object images other than the N selected object images to the process of the outputting of object information.
Iwamoto discloses the process further comprises selecting, from P (the value of P is an integer equal to or larger than N) object images with which the first tracking ID is associated, (Paragraph 93-94: “ Next, the feature extraction parameter generation unit 12 generates, for each of the M types of features, a feature extraction parameter which is a parameter for extracting a feature from the image, and stores it in the feature extraction parameter storing unit 23 (S102).Then, the feature extraction section 131 of the feature extraction unit 13 extracts the M types of features from each of the original images in the original image storing unit 21 in accordance with the extraction parameters for the M types of features, and stores them in the feature storing unit 24 (S103).”) the N object images based on P priorities associated with the respective P object images and outputting the N selected object images to the process of the sorting, (Paragraph 95: “ the feature selection unit 14 receives the M types of features of the original images and the altered images stored in the feature storing unit 24, … and selects N types of features from the M types of features, with the discrimination capability which is a degree of discriminating different images and the robustness which is a degree that the value of a feature does not vary due to an alteration process applied to an image being evaluation criteria. and outputs them (S105) “) and outputting (P-N) object images other than the N selected object images to the process of the outputting of object information. (Paragraph 98: “Then, the feature selection unit 14 judges whether or not the N types of features are determined (S108), and if the N types of features have not been determined, the feature selection unit 14 returns to step S107 and continue to determine the remaining types of features. On the other hand, if the N types of features have been determined, the feature selection unit 14 outputs the determined N types of features to a storing unit”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ito by including the selection of features for feature extraction taught by Iwamoto, so that the resulting invention selects features suitable for image signatures for discriminating images; thus, one of ordinary skill in the art would have been motivated to combine the references since this would improve the accuracy of determining the identity of images. (Iwamoto: Paragraph 25)
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
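For illustration of the priority-based pre-selection at issue in claim 3, the following sketch shows one way N of P object images sharing a tracking ID could be chosen from P priorities, with the selected N forwarded to the sorting process and the remaining P-N forwarded to the outputting of object information. The sketch is hypothetical: it is not Iwamoto’s feature-type selection procedure itself, and the names and example values are assumptions.

# Hypothetical sketch of selecting N of P object images by priority;
# not Iwamoto's procedure, names and values invented for illustration.

from typing import List, Sequence, Tuple


def select_n_by_priority(image_ids: Sequence[int],
                         priorities: Sequence[float],
                         n: int) -> Tuple[List[int], List[int]]:
    """Return (selected N image IDs, remaining P-N image IDs)."""
    if len(image_ids) != len(priorities):
        raise ValueError("one priority per object image is required")
    order = sorted(range(len(image_ids)),
                   key=lambda i: priorities[i], reverse=True)
    selected = [image_ids[i] for i in order[:n]]
    remainder = [image_ids[i] for i in order[n:]]
    return selected, remainder


if __name__ == "__main__":
    ids = [10, 11, 12, 13, 14]          # P = 5 object images, one tracking ID
    prio = [0.2, 0.9, 0.5, 0.7, 0.1]    # P priorities
    print(select_n_by_priority(ids, prio, n=3))   # -> ([11, 13, 12], [10, 14])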
Claims 5 and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Ito et al. (U.S. 20100034464 A1; Ito), in view of Oami Ryoma (WO-2020217368 A1).
Regarding claim 5, Ito discloses the determining includes determining a value of L (the value of L is an integer equal to or larger than one but equal to or smaller than K), which is the number of object images whose features will be extracted of K (the value of K is an integer equal to or larger than two) object images (Figs. 1-3; Paragraph 11: “The feature selection unit 130 selects M (M is a positive integer smaller than N) feature extraction units from N feature extraction units”) Paragraph 30: “In step S350, the feature selection unit 130 selects M feature extraction units from N feature extraction units such that degree in separation of the confidence value c.sub.D which represents object-likelihood between the object and background thereof becomes larger, in order to adapt to change of appearances of the object and background thereof.”) with which a second tracking ID different from the first tracking ID is associated, (Paragraph 46: “step 331, the control unit 160 proceeds to step S320 and processes the next image when the control unit 160 determines that detection of the object is unsuccessful ("No" in step S331). The control unit 160 proceeds to step S350 when the control unit 160 determine that detection of the object is successful ("Yes" in step S331).”, the person ordinary skill in the art would know that the process tracking object of the next image is interpreted as “second tracking ID is associated are object images detected in K second image frames”) in accordance with a situation of processing resources of the feature extraction apparatus; . (Paragraph 23: “ In step S340, the object tracking unit 140 tracks object using M features extracted by M feature extraction units selected by the feature selection unit 130.”; Paragraph 11: “The object tracking unit 140 tracks the object using the M features extracted from the selected M (M is a positive integer smaller than N) feature extraction units.”) the sorting includes receiving the K object images and outputting, of the K received object images, (Paragraph 19: “In step S310, the control unit 160 stores image sequence acquired by the acquisition unit 110 in the storage unit 150. … In step S330, the object detection unit 120 detects object using N features extracted by the N feature extraction units 151 (g.sub.1, g.sub.2, . . . , g.sub.N) stored in the storage unit 150.”) the L object images whose features will be extracted to the process of the extracting, and outputting, of the K object images, (K-L) object images whose features will not be extracted other than the L object images whose features will be extracted to the process of the outputting of object information, (Paragraph 30: “In step S350, the feature selection unit 130 selects M feature extraction units from N feature extraction units such that degree in separation of the confidence value c.sub.D which represents object-likelihood between the object and background thereof becomes larger, in order to adapt to change of appearances of the object and background thereof. 
The output of the unselected N-M feature extraction units is treated as 0 in the calculation of c.sub.”) the N object images with which the first tracking ID is associated are object images detected in N first image frames captured by a first camera, (Fig.1: acquisition unit 110) and the K object images with which the second tracking ID is associated are object images detected in K second image frames (Paragraphs 44-46: “the control unit 160 determines that the present mode is the tracking mode in a case where detection and tracking of the object in the previous image are successful and feature selection is performed for at least one object in step S350. … in step S330, the object detection unit 120 detects objects using N features extracted by the N feature extraction units g.sub.1, g.sub.2, . . . , g.sub.N stored in the storage unit 150. … n step 331, the control unit 160 proceeds to step S320 and processes the next image when the control unit 160 determines that detection of the object is unsuccessful ("No" in step S331). The control unit 160 proceeds to step S350 when the control unit 160 determine that detection of the object is successful ("Yes" in step S331).”, the person ordinary skill in the art would know that the process tracking object of the next image is interpreted as “second tracking ID is associated are object images detected in K second image frames”)
However, Ito does not disclose that the K object images with which the second tracking ID is associated are object images detected in K second image frames captured by a second camera different from the first camera.
Ryoma discloses the N object images with which the first tracking ID is associated are object images detected in N first image frames (Paragraphs 26-27 : “In step S104, the selection unit 102 selects only the objects whose feature quantity predicted by the prediction unit 101 in step S102 satisfies a predetermined condition among the plurality of objects. In step S106, the feature amount extraction unit 103 extracts the feature amount from the object selected by the selection unit 102 in step S104.”) captured by a first camera, the K object images with which the second tracking ID is associated are object images detected in K second image frames (Paragraphs 26-27 : “In step S104, the selection unit 102 selects only the objects whose feature quantity predicted by the prediction unit 101 in step S102 satisfies a predetermined condition among the plurality of objects. In step S106, the feature amount extraction unit 103 extracts the feature amount from the object selected by the selection unit 102 in step S104.”) captured by a second camera different from the first camera. ( Paragraph 7: “Features are used to collate objects detected between different cameras and to search for the same or similar objects in previously captured and stored footage.”; Fig.2 and Paragraph 37 : “The image acquisition unit 201 acquires images captured by one or more imaging devices such as cameras (not shown). The photographing device captures an image of an area or an object to be monitored.”; it shows that the plurality of images captured by plurality or two cameras read as “ first image capture by first camera” and “second image capture by second camera” ).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ito by including the information processing device taught by Ryoma, so that the resulting information processing device selects an object for feature quantity extraction even in a situation where a large number of objects are displayed on the screen; thus, one of ordinary skill in the art would have been motivated to combine the references since this would improve the variation (diversity) of the acquired features. (Ryoma: Paragraph 114)
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
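For illustration of the two-camera scenario addressed in claim 5, the sketch below assumes one tracking ID grouping N images from a first camera and a second tracking ID grouping K images from a second camera, with a single resource-usage heuristic fixing M and L for the two groups. The per-track budget function and all names are assumptions and are not taken from Ito or from Oami (Ryoma).

# Hypothetical sketch: one resource heuristic fixes the per-track
# extraction counts (M and L) for tracks coming from two cameras.
# Names, the budget rule, and the example values are invented.

from collections import defaultdict
from typing import Dict


def per_track_budget(track_sizes: Dict[int, int],
                     cpu_utilization: float) -> Dict[int, int]:
    """For each tracking ID with N_t images, pick 1 <= count <= N_t to extract."""
    headroom = max(0.0, 1.0 - cpu_utilization)
    return {tid: max(1, min(n, round(n * headroom)))
            for tid, n in track_sizes.items()}


if __name__ == "__main__":
    # (tracking_id, camera_id, image_id) triples from two cameras.
    detections = [(1, "cam_A", i) for i in range(4)] + \
                 [(2, "cam_B", i) for i in range(6)]
    sizes = defaultdict(int)
    for tid, _, _ in detections:
        sizes[tid] += 1
    print(per_track_budget(dict(sizes), cpu_utilization=0.75))
    # e.g. {1: 1, 2: 2} when only 25% headroom remains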
Regarding claim 7, Ito discloses the process further comprises: detecting, in each of a plurality of captured images, an object region that corresponds to an object, identifying positions of the respective object regions in the captured images, (Paragraphs 19-20: “the object detection unit 120 detects object using N features extracted by the N feature extraction units 151 (g.sub.1, g.sub.2, . . . , g.sub.N) stored in the storage unit 150. … an area including positions of the input image is set to each position of the input image and classification is performed by extracting features from the set area to classify whether the position is an object. Therefore, the set areas include object and background thereof at the positions near the boundary of the object and background thereof.”)
However, Ito does not disclose attaching object IDs to the respective object regions, and outputting a plurality of object images, each of the object images including a captured image where the object region is detected, image identification information indicating the captured image, information regarding the identified position, and the object ID; and attaching one tracking ID to all object images of one object using the plurality of object images received from the process of the detecting and outputting, to the process of the extracting, the plurality of object images to which the tracking ID is attached.
Ryoma discloses the process further comprises detecting, in each of a plurality of captured images, an object region that corresponds to an object, (Paragraphs 39-40: “The detection unit 202 detects an object from the video output by the video acquisition unit 201, and outputs the detection result as detection result information. When the object is a person, the detection unit 202 detects a person area by using a detector that has learned the image features of the person”) identifying positions of the respective object regions in the captured images, attaching object IDs to the respective object regions, and outputting a plurality of object images, each of the object images including a captured image where the object region is detected, image identification information indicating the captured image, information regarding the identified position, and the object ID; (Paragraph 41: “The detection unit 202 generates detection result information from the information of the detected object. The detection result information includes information for identifying a frame such as frame time information or frame number and information on the detected object. The object information includes the detection position and size of the object. … When a plurality of objects are detected, the detection result information includes the information of the plurality of detected objects in the generated detection result information, and includes an identifier that distinguishes the detected objects within the same frame. The identifier is ID information assigned to distinguish a plurality of objects detected in the same frame,”) attaching one tracking ID to all object images of one object using the plurality of object images received from the process of the detecting and outputting, to the process of the extracting, the plurality of object images to which the tracking ID is attached. (Paragraphs 43-46: “ The tracking unit 203 performs a tracking process called Tracking by Detection based on the detection result information. That is, the tracking unit 203 is included in the tracking result information of the objects up to the previous time, and which detection object included in the detection result information of the current time corresponds to each object to be tracked. And update the tracking results. The tracking unit 203 may predict the position of the object to be tracked by the Kalman filter or the particle filter and associate it with the detected object at the current time … The tracking result information includes the position and size of the object on the image, the identifier assigned to each tracking object, and the associated detection object identifier.”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ito by including the information processing device taught by Ryoma, so that the resulting information processing device selects an object for feature quantity extraction even in a situation where a large number of objects are displayed on the screen; thus, one of ordinary skill in the art would have been motivated to combine the references since this would improve the variation (diversity) of the acquired features as well as reduce the calculation cost required for extracting the feature amount. (Ryoma: Paragraph 180)
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
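For illustration of the Tracking by Detection bookkeeping described in the cited passages of Ryoma for claims 7 and 8, the sketch below shows detections carrying frame information, a position and size, and a per-frame object ID, with a tracker attaching one tracking ID to every image of the same object. The nearest-center association is a simplified stand-in for the Kalman-filter or particle-filter prediction mentioned in the reference; all names and thresholds are assumptions.

# Hypothetical "Tracking by Detection" sketch: detections carry frame id,
# per-frame object id, position, and size; the tracker assigns tracking IDs.
# The nearest-center matching below is a simplified stand-in for the
# Kalman/particle-filter prediction mentioned in Ryoma.

from dataclasses import dataclass
from typing import Dict, List, Tuple
import itertools
import math


@dataclass
class Detection:
    frame: int
    object_id: int                 # distinguishes objects within one frame
    center: Tuple[float, float]    # detected position
    size: Tuple[float, float]      # detected width, height


class Tracker:
    def __init__(self, max_distance: float = 50.0):
        self.max_distance = max_distance
        self.last_center: Dict[int, Tuple[float, float]] = {}
        self._ids = itertools.count(1)

    def update(self, detections: List[Detection]) -> List[Tuple[Detection, int]]:
        """Associate detections with tracks; return (detection, tracking_id) pairs."""
        results = []
        for det in detections:
            best_tid, best_dist = None, self.max_distance
            for tid, center in self.last_center.items():
                dist = math.dist(center, det.center)
                if dist < best_dist:
                    best_tid, best_dist = tid, dist
            if best_tid is None:                       # start a new track
                best_tid = next(self._ids)
            self.last_center[best_tid] = det.center
            results.append((det, best_tid))
        return results


if __name__ == "__main__":
    tracker = Tracker()
    frame0 = [Detection(0, 0, (10.0, 10.0), (20.0, 40.0))]
    frame1 = [Detection(1, 0, (14.0, 11.0), (20.0, 40.0))]
    for frame in (frame0, frame1):
        for det, tid in tracker.update(frame):
            print(f"frame={det.frame} object_id={det.object_id} tracking_id={tid}")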
Regarding claim 8, Ito, as modified by Ryoma, discloses all the claims invention. Ryoma further discloses the process further comprises: detecting, in each of a plurality of first image frames captured by the first camera, an object region that corresponds to an object, (Paragraphs 38-40: “The video acquisition unit 201 outputs the acquired video as a moving image sequence to the detection unit 202, the tracking unit 203, and the feature amount extraction unit 208. … The detection unit 202 detects an object from the video output by the video acquisition unit 201, and outputs the detection result as detection result information. When the object is a person, the detection unit 202 detects a person area by using a detector that has learned the image features of the person”) identifying positions of the respective object regions in the first image frames, attaching object IDs to the respective object regions, and outputting a plurality of first object images, each of the first object images including a first image frame where the object region is detected, image identification information indicating the first image frame, information regarding the identified position, and the object ID; (Paragraph 41: “The detection unit 202 generates detection result information from the information of the detected object. The detection result information includes information for identifying a frame such as frame time information or frame number and information on the detected object. The object information includes the detection position and size of the object. … When a plurality of objects are detected, the detection result information includes the information of the plurality of detected objects in the generated detection result information, and includes an identifier that distinguishes the detected objects within the same frame. The identifier is ID information assigned to distinguish a plurality of objects detected in the same frame,”) attaching one tracking ID to all first object images of one object using the plurality of first object images received from the process of the detecting, and outputting, to the feature extraction apparatus, the plurality of first object images to which the tracking ID is attached; (Paragraphs 43-46: “ The tracking unit 203 performs a tracking process called Tracking by Detection based on the detection result information. That is, the tracking unit 203 is included in the tracking result information of the objects up to the previous time, and which detection object included in the detection result information of the current time corresponds to each object to be tracked. And update the tracking results. The tracking unit 203 may predict the position of the object to be tracked by the Kalman filter or the particle filter and associate it with the detected object at the current time … The tracking result information includes the position and size of the object on the image, the identifier assigned to each tracking object, and the associated detection object identifier.”) detecting, in each of a plurality of second image frames captured by the second camera, an object region that corresponds to an object, (Paragraphs 38-40: “The video acquisition unit 201 outputs the acquired video as a moving image sequence to the detection unit 202, the tracking unit 203, and the feature amount extraction unit 208. … The detection unit 202 detects an object from the video output by the video acquisition unit 201, and outputs the detection result as detection result information. 
When the object is a person, the detection unit 202 detects a person area by using a detector that has learned the image features of the person”) identifying positions of the respective object regions in the second image frames, attaching object IDs to the respective object regions, and outputting a plurality of second object images, each of the second object images including a second image frame where the object region is detected, image identification information indicating the second image frame, information regarding the identified position, and the object ID; (Paragraph 41: “The detection unit 202 generates detection result information from the information of the detected object. The detection result information includes information for identifying a frame such as frame time information or frame number and information on the detected object. The object information includes the detection position and size of the object. … When a plurality of objects are detected, the detection result information includes the information of the plurality of detected objects in the generated detection result information, and includes an identifier that distinguishes the detected objects within the same frame. The identifier is ID information assigned to distinguish a plurality of objects detected in the same frame,”) and attaching one tracking ID to all second object images of one object using the plurality of second object images received from the process of the detecting, and outputting the plurality of second object images to which the tracking ID is attached to the process of the extracting. (Paragraphs 43-46: “ The tracking unit 203 performs a tracking process called Tracking by Detection based on the detection result information. That is, the tracking unit 203 is included in the tracking result information of the objects up to the previous time, and which detection object included in the detection result information of the current time corresponds to each object to be tracked. And update the tracking results. The tracking unit 203 may predict the position of the object to be tracked by the Kalman filter or the particle filter and associate it with the detected object at the current time … The tracking result information includes the position and size of the object on the image, the identifier assigned to each tracking object, and the associated detection object identifier.”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ito by including the information processing device taught by Ryoma, so that the resulting information processing device selects an object for feature quantity extraction even in a situation where a large number of objects are displayed on the screen; thus, one of ordinary skill in the art would have been motivated to combine the references since this would improve the variation (diversity) of the acquired features as well as reduce the calculation cost required for extracting the feature amount. (Ryoma: Paragraph 180)
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Ito et al. (U.S. 20100034464 A1; Ito), in view of Oami Ryoma (WO-2020217368 A1), and further in view of Iwamoto et al. (U.S. 20110135203 A1; Iwamoto).
Regarding claim 6, Ito, as modified by Ryoma, discloses the claimed invention except wherein the process includes: selecting, from P (the value of P is an integer equal to or larger than N) object images with which the first tracking ID is associated, the N object images based on P priorities associated with the respective P object images and outputting the N selected object images to the process of the sorting, and outputting (P-N) object images other than the N selected object images to the process of the outputting of object information, and selecting, from Q (the value of Q is an integer equal to or larger than K) object images with which the second tracking ID is associated, the K object images based on Q priorities associated with the Q respective object images and outputting the K selected object images to the process of the sorting, and outputting (Q-K) object images other than the K selected object images to the process of the outputting of object information.
Iwamoto discloses selecting, from P (the value of P is an integer equal to or larger than N) object images with which the first tracking ID is associated, (Paragraph 93-94: “ Next, the feature extraction parameter generation unit 12 generates, for each of the M types of features, a feature extraction parameter which is a parameter for extracting a feature from the image, and stores it in the feature extraction parameter storing unit 23 (S102).Then, the feature extraction section 131 of the feature extraction unit 13 extracts the M types of features from each of the original images in the original image storing unit 21 in accordance with the extraction parameters for the M types of features, and stores them in the feature storing unit 24 (S103).”) the N object images based on P priorities associated with the respective P object images and outputting the N selected object images to the process of the sorting, (Paragraph 95: “ the feature selection unit 14 receives the M types of features of the original images and the altered images stored in the feature storing unit 24, … and selects N types of features from the M types of features, with the discrimination capability which is a degree of discriminating different images and the robustness which is a degree that the value of a feature does not vary due to an alteration process applied to an image being evaluation criteria. and outputs them (S105) “) and outputting (P-N) object images other than the N selected object images to the process of the outputting of object information. (Paragraph 98: “Then, the feature selection unit 14 judges whether or not the N types of features are determined (S108), and if the N types of features have not been determined, the feature selection unit 14 returns to step S107 and continue to determine the remaining types of features. On the other hand, if the N types of features have been determined, the feature selection unit 14 outputs the determined N types of features to a storing unit”), and selecting, from Q (the value of Q is an integer equal to or larger than K) object images with which the second tracking ID is associated, ( Paragraphs 27-28: “The original image storing unit 21 is an image database which stores a plurality of original images in association with image IDs such as numbers for uniquely identifying the respective original images. 
… … the group of original images stored in the original image storing unit 21 is used for selecting features suitable for image signatures, it is desirable to include as many original images as possible (for example, not less than ten thousand images).”, it show the selecting features suitable is perform on plurality of images IDs read as “second tracking ID”; Paragraph 93-94: “ Next, the feature extraction parameter generation unit 12 generates, for each of the M types of features, a feature extraction parameter which is a parameter for extracting a feature from the image, and stores it in the feature extraction parameter storing unit 23 (S102).Then, the feature extraction section 131 of the feature extraction unit 13 extracts the M types of features from each of the original images in the original image storing unit 21 in accordance with the extraction parameters for the M types of features, and stores them in the feature storing unit 24 (S103).”) the K object images based on Q priorities associated with the Q respective object images and outputting the K selected object images to the process of the sorting, (Paragraph 95: “ the feature selection unit 14 receives the M types of features of the original images and the altered images stored in the feature storing unit 24, … and selects N types of features from the M types of features, with the discrimination capability which is a degree of discriminating different images and the robustness which is a degree that the value of a feature does not vary due to an alteration process applied to an image being evaluation criteria. and outputs them (S105) “) and outputting (Q-K) object images other than the K selected object images to the process of the outputting of object information. (Paragraph 98: “Then, the feature selection unit 14 judges whether or not the N types of features are determined (S108), and if the N types of features have not been determined, the feature selection unit 14 returns to step S107 and continue to determine the remaining types of features. On the other hand, if the N types of features have been determined, the feature selection unit 14 outputs the determined N types of features to a storing unit”).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Ito and Ryoma by including the selection of features for feature extraction taught by Iwamoto, so that the resulting invention selects features suitable for image signatures for discriminating images; thus, one of ordinary skill in the art would have been motivated to combine the references since this would improve the accuracy of determining the identity of images. (Iwamoto: Paragraph 25)
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Kuzuka et al (U.S. 20170220894 A1), “Image Processing Device, Image Processing Method and Program”, teaches about an image processing device that detects an object on a moving image, the device including: a plurality of object feature amount calculation units configured to calculate an object feature amount, which is related to an object on an image; a feature amount selection unit configured to select an object feature amount to be used from the plurality of object feature amounts; an object information calculation unit configured to calculate object information related to an object on the current frame by using the selected feature amount and object information related to the object on the previous frame; and an object attribute information calculation unit configured to calculate object attribute information related to the object on the current frame on the basis of the calculated object information related to the object on the current frame.
Wang et al (U.S. 20190205694 A1), “Multi-Resolution Feature Description For Object Recognition”, teaches techniques and systems for determining features for one or more objects in one or more video frames. A size of the object can be determined based on the image, for example based on inter-eye distance of a face. Based on the size, either a high-resolution set of features or a low-resolution set of features is selected to compare to the features of the object. The object can be identified by matching the features of the object to matching features from the selected set of features.
Takada (U.S. 20180137632 A1), “ System and Method of Hybrid Tracking for Match Moving”, teaches about a system and method that maximizes tracking speed of an object in a sequence of images by selecting a technique for tracking the object independently for each frame in a video. The system includes an object feature detector that detects object features in a reference frame of the video and a feature comparator that determines a number of object features in each frame in the sequence of images that match the detected object features in the reference frame. Moreover, a tracking pattern selector selects the type of object tracking to track the object in the current frame based on the determined matched object features between the reference frame and the current frame of the video.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Duy A Tran whose telephone number is (571)272-4887. The examiner can normally be reached Monday-Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ONEAL R MISTRY can be reached at (313)-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DUY TRAN/Examiner, Art Unit 2674
/ONEAL R MISTRY/Supervisory Patent Examiner, Art Unit 2674