DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This communication is in response to the application filed on 04/07/2023.
Claims 1–15 are pending in this application.
Drawings
The drawing(s) filed on 04/07/2023 are accepted by the Examiner.
Response to Amendment
Applicant’s amendments filed on 09/02/2025 have been entered and made of record.
Currently pending claim(s): 1–15
Independent claim(s): 1, 14 and 15
Amended claim(s): 1, 10 and 13
Response to Applicant’s Arguments
This Office action is responsive to Applicant’s arguments/remarks made in an amendment received on 09/02/2025.
In view of Applicant’s arguments/remarks and the amendment filed on 09/02/2025 with respect to independent claims 1, 14 and 15 under 35 U.S.C. § 103, the claim rejections have been fully considered, but the arguments are found not persuasive (see page 11). Therefore, the claim rejections under 35 U.S.C. §§ 102 and 103 are maintained.
Applicant argues, in summary, that the applied prior art (Tomohiro) does not disclose or suggest the following (see page 11):
“Tomohiro does not teach or suggest determining the final group using the similarity between two groups obtained from different captured image”
However, the Examiner respectfully disagrees with Applicant’s line of reasoning. The Examiner has thoroughly reviewed Applicant’s arguments but respectfully maintains that the cited reference reasonably and properly meets the claimed limitations.
Tomohiro discloses, at ¶ [0023]: “If the comparison determines that the person included in the provisional group related to the entrance image and the person included in the provisional group related to the store image are the same person, the reconstruction unit 322 merges those provisional groups. The reconstructing unit 322 outputs the integrated group as a confirmed group”. Tomohiro thus determines a final group based on whether two groups contain a person sharing the same facial information. If so, the two groups are merged into a confirmed group, which the Examiner interprets as the final group. Tomohiro compares the groups from two different images, specifically the in-store image and the entrance image, which the Examiner interprets as the different captured images.
Therefore, under this broad interpretation, Tomohiro discloses the Applicant’s invention: selecting a person and setting a first group based on the spatial and state conditions in a first image; selecting a second person, based on an attribute of the person, from among the plurality of persons within the first shot image, and setting a second group based on the spatial and state conditions; comparing the attributes of each person selected from each group to calculate a similarity; and setting the persons in the first group as the final group if the similarity condition is met. Thus, due to Applicant’s broad claim language, Applicant’s invention is not far removed from the art of record. Accordingly, these limitations do not render the claims patentably distinct over the prior art of record. As a result, it is respectfully submitted that the present application is not in condition for allowance.
Thus, the Examiner maintains that the limitations, as presented and as rejected, were properly and adequately met, and the rejection as presented in the Non-Final rejection is maintained with respect to the above limitation. Additional and/or modified citations may be present to more precisely address the limitations; however, the grounds of rejection remain the same.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1–3, 6, 8–9 and 12–15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tomohiro (JP 2017/130061 A, hereafter "Tomohiro").
Regarding claim 1, Tomohiro discloses a group specification apparatus for specifying a group from a shot image, comprising: at least one memory storing instructions (See Tomohiro, ¶ [0025], The CPU 201 uses a RAM 202 as a work memory, executes programs stored in a ROM 203 or a storage unit 209, and controls the configuration described below via a system bus 208. The storage unit 209 is a hard disk drive (HDD), a solid state drive (SSD), a flash memory, or the like, and stores an OS and a program for implementing image processing, which will be described later); and at least one processor configured to execute the instructions to:
select a person from among a plurality of persons within a first shot image, and set a first group candidate, based on a spatial condition stipulating a position of another person and a state condition stipulating a state of the other person, with reference to the selected person (See Tomohiro, ¶ [0039], In the video analysis result 711 relating to a successful example of in-store images from the in-store camera 104, two groups, group A and group B, are estimated as shown by the dashed lines 712 and 713 indicating the grouping. This is because the grouping unit 314 calculates the movement direction and movement speed 714, 715 of all people, as well as the distances 716, 717 between people, by comparing multiple image frames in the video, and assumes that multiple people engaging in similar behavior belong to one group. ¶ [0044], The video analysis unit 321 compares the facial feature amounts of the people belonging to these provisional groups. [Fig. 6], 712, 713. Note: Examiner is interpreting the comparison as selecting a candidate person from each group);
select a person from among a plurality of persons within a second shot image having a different shooting time from the first shot image, using an attribute of the person selected from among the plurality of persons within the first shot image (See Tomohiro, ¶ [0044], The video analysis unit 321 compares the facial feature amounts of the people belonging to these provisional groups. ¶ [0023], If the comparison determines that the person included in the provisional group related to the entrance image and the person included in the provisional group related to the store image are the same person, the reconstruction unit 322 merges those provisional groups. The reconstructing unit 322 outputs the integrated group as a confirmed group. Note: Each person is selected from the plurality of people in the group and their facial features are compared with the facial features of another group), and set a second group candidate, based on the spatial condition and the state condition, with reference to the selected person (See Tomohiro, ¶ [0039], In the video analysis result 711 relating to a successful example of in-store images from the in-store camera 104, two groups, group A and group B, are estimated as shown by the dashed lines 712 and 713 indicating the grouping. This is because the grouping unit 314 calculates the movement direction and movement speed 714, 715 of all people, as well as the distances 716, 717 between people, by comparing multiple image frames in the video, and assumes that multiple people engaging in similar behavior belong to one group. ¶ [0044], The video analysis unit 321 compares the facial feature amounts of the people belonging to these provisional groups. [Fig. 6], 712, 713. Note: Examiner is interpreting the comparison as selecting a candidate person from each group. Since the groups are made by direction and movement, the Examiner is interpreting that as the spatial and state conditions, and a person is selected based on the facial features);
compare first attribute configuration information including an attribute of each person constituting the first group candidate with second attribute configuration information including an attribute of each person constituting the second group candidate, and calculate a similarity between the first group candidate and the second group candidate (See Tomohiro, ¶ [0044], The video analysis unit 321 compares the facial feature amounts of the people belonging to these provisional groups. ¶ [0023], If the comparison determines that the person included in the provisional group related to the entrance image and the person included in the provisional group related to the store image are the same person, the reconstruction unit 322 merges those provisional groups. The reconstructing unit 322 outputs the integrated group as a confirmed group. Note: Examiner is interpreting this as the cameras comparing all provisional groups, including each camera’s own provisional groups that were estimated); and
specify the persons constituting the first group candidate as one group, if the calculated similarity satisfies a set condition (See Tomohiro, ¶ [0044], The video analysis unit 321 compares the facial feature amounts of the people belonging to these provisional groups, and if the same people are present, merges these provisional groups, and if not, makes them independent groups).
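For clarity of the record, the merging behavior the Examiner attributes to Tomohiro’s ¶¶ [0023] and [0044] — merging two provisional groups into a confirmed (final) group when the same person appears in both, and otherwise keeping them independent — can be sketched as follows. This is an illustrative sketch only; the function name `merge_if_shared_person` and the person identifiers are hypothetical and are not drawn from the reference.

```python
def merge_if_shared_person(group_a, group_b):
    """Merge two provisional groups into one confirmed group when they
    share at least one person (e.g., matched by facial features);
    otherwise keep them as independent groups."""
    if set(group_a) & set(group_b):
        # Same person present in both provisional groups: merge them.
        return [sorted(set(group_a) | set(group_b))]
    # No common person: the provisional groups remain independent.
    return [sorted(set(group_a)), sorted(set(group_b))]

# Entrance-image provisional group vs. in-store provisional group.
entrance_group = ["c1", "c2"]
store_group = ["c1", "c3"]
print(merge_if_shared_person(entrance_group, store_group))
# → [['c1', 'c2', 'c3']]
```

The merged list corresponds to what Tomohiro calls the confirmed group, which the Examiner interprets as the claimed final group.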
Regarding claim 2, Tomohiro discloses the group specification apparatus according to claim 1, wherein the at least one processor is further configured to execute the instructions to: set the second group candidate, for each of a plurality of second shot images having different shooting times (See Tomohiro, ¶ [0039], This is because the grouping unit 314 calculates the movement direction and movement speed 714, 715 of all people, as well as the distances 716, 717 between people, by comparing multiple image frames in the video, and assumes that multiple people engaging in similar behavior belong to one group. [Fig. 6], 712, 713),
perform for each of the plurality of second shot images, comparison of the attributes of the persons constituting the first group candidate that are included in the first attribute configuration information with the attributes of the persons constituting the second group candidate that are included in the second attribute configuration information, and calculation of the similarity (See Tomohiro, ¶ [0044], The video analysis unit 321 compares the facial feature amounts of the people belonging to these provisional groups), and
determine whether the set condition is satisfied, using the similarity calculated for each of the plurality of second shot images, and, if the set condition is satisfied, specify the persons constituting the first group candidate as one group (See Tomohiro, ¶ [0044], The video analysis unit 321 compares the facial feature amounts of the people belonging to these provisional groups, and if the same people are present, merges these provisional groups, and if not, makes them independent groups).
Regarding claim 3, Tomohiro discloses the group specification apparatus according to claim 1, wherein the spatial condition includes the other person being present within a set range centered on the selected person (See Tomohiro, ¶ [0039], This is because the grouping unit 314 calculates the movement direction and movement speed 714, 715 of all people, as well as the distances 716, 717 between people, by comparing multiple image frames in the video, and assumes that multiple people engaging in similar behavior belong to one group), and
the state condition includes the other person facing the selected person or facing a same direction as the selected person (See Tomohiro, ¶ [0042], The second calculation formula is defined so that if the group attribute is a family, the accuracy is higher when the movement direction and speed are similar, the distance between people is close, and the number of people is closer to 3 to 5. The second calculation formula is defined so that if the group attribute is a couple, the accuracy is higher the closer the movement direction and speed are and the closer the distance between people are, and also so that the number of people is limited to two. The second calculation formula is defined so that if the group attribute is friends, the moving direction and speed are close, the closer the certain distance is to 2 m, the higher the accuracy, and any number of people greater than or equal to two can be used. [Fig. 7]. See also [Fig. 5], (a). Note: Examiner is interpreting the movement direction as the people facing the same direction).
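Claim 3’s spatial condition (the other person being within a set range centered on the selected person) and state condition (the other person facing the selected person or facing the same direction) can be expressed as a simple predicate. The following is a minimal sketch under assumed conventions: positions in metres, headings in degrees, and the 2 m / 30° thresholds are hypothetical parameters, not values taken from the claims or the reference.

```python
import math

def satisfies_conditions(selected_pos, selected_dir, other_pos, other_dir,
                         max_dist=2.0, max_angle=30.0):
    """Spatial condition: the other person lies within max_dist of the
    selected person. State condition: the other person faces the same
    direction (within max_angle degrees) or faces the selected person."""
    dx = other_pos[0] - selected_pos[0]
    dy = other_pos[1] - selected_pos[1]
    if math.hypot(dx, dy) > max_dist:
        return False  # outside the set range centered on the selected person
    # Same-direction check: compare heading angles with wrap-around.
    same_dir = abs((other_dir - selected_dir + 180) % 360 - 180) <= max_angle
    # Facing check: the other person's heading points back toward the selected person.
    toward = math.degrees(math.atan2(-dy, -dx)) % 360
    facing = abs((other_dir - toward + 180) % 360 - 180) <= max_angle
    return same_dir or facing

print(satisfies_conditions((0, 0), 90.0, (0, 1), 90.0))  # → True
```

Two people one metre apart and heading the same way satisfy both conditions; a person five metres away fails the spatial condition regardless of heading.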
Regarding claim 6, Tomohiro discloses the group specification apparatus according to claim 1, wherein the first attribute configuration information is constituted in a form where each attribute of the persons constituting the first group candidate is organized by the number of persons having the attribute (See Tomohiro, [Fig. 7], Number of People, Group Attributes. ¶ [0041], FIG. 7 is a data structure diagram showing an example of the behavior condition storage unit 326),
the second attribute configuration information is constituted in a form where each attribute of the persons constituting the second group candidate is organized by the number of persons having the attribute (See Tomohiro, [Fig. 7], Number of People, Group Attributes. ¶ [0041], FIG. 7 is a data structure diagram showing an example of the behavior condition storage unit 326), and
the at least one processor is further configured to execute the instructions to: calculate, as the similarity, an inner product of the first attribute configuration information and the second attribute configuration information, or a Euclidean distance therebetween (See Tomohiro, ¶ [0064], Instead of cosine similarity, measures such as Euclidean distance, Mahalanobis distance, Pearson's correlation coefficient, etc. may be used).
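Claim 6’s similarity measures — an inner product or a Euclidean distance between attribute configuration information organized as per-attribute person counts — can be computed by aligning the two candidates’ counts over a common attribute vocabulary. A minimal sketch, with hypothetical attribute labels:

```python
import math
from collections import Counter

def attribute_vectors(group1, group2):
    """Build aligned count vectors: each entry is the number of persons
    in the group having a given attribute."""
    c1, c2 = Counter(group1), Counter(group2)
    vocab = sorted(set(c1) | set(c2))
    return [c1[a] for a in vocab], [c2[a] for a in vocab]

def inner_product(group1, group2):
    v1, v2 = attribute_vectors(group1, group2)
    return sum(x * y for x, y in zip(v1, v2))

def euclidean_distance(group1, group2):
    v1, v2 = attribute_vectors(group1, group2)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(v1, v2)))

g1 = ["adult_male", "adult_female", "child"]            # first group candidate
g2 = ["adult_male", "adult_female", "child", "child"]   # second group candidate
print(inner_product(g1, g2))       # → 4 (higher means more similar)
print(euclidean_distance(g1, g2))  # → 1.0 (lower means more similar)
```

Either value could then be tested against the claimed set condition (a threshold) to decide whether the first group candidate is specified as one group.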
Regarding claim 8, Tomohiro discloses the group specification apparatus according to claim 1, wherein the shooting time of the second shot image is earlier than the shooting time of the first shot image, and the first shot image and the second shot image are shot by the same image capturing apparatus (See Tomohiro, ¶ [0031], A first image frame 401 of the entrance camera 102 captures three customers 402 moving in the same direction at a constant speed, but two customers 403 standing outside the entrance 109 are not yet captured. A second image frame 411 of the entrance camera 102 taken a certain time later shows two customers 412 who were outside earlier, but three customers 413 have passed through the shooting area of the entrance camera 102 and are not captured. ¶ [0032], Therefore, the video analysis unit 321 groups together image frames over a certain period, for example, five seconds, into image frame groups 511 to 514, and performs processing by regarding each image frame group as a target for image analysis. Note: More than one group can be determined by camera 102).
Regarding claim 9, Tomohiro discloses the group specification apparatus according to claim 1, wherein the first shot image and the second shot image are shot by different image capturing apparatuses, and the image capturing apparatus that shot the first shot image (See Tomohiro, ¶ [0019], The detection unit 313 obtains the video from the entrance camera 102 and the video from the in-store cameras 104 and 105 from the recorded video storage unit 312 via the camera control unit 311) and
the image capturing apparatus that shot the second shot image are disposed so as to be able to shoot a same subject within a predetermined time range (See Tomohiro, ¶ [0019], The detection unit 313 obtains the video from the entrance camera 102 and the video from the in-store cameras 104 and 105 from the recorded video storage unit 312 via the camera control unit 311. ¶ [0032], Therefore, the video analysis unit 321 groups together image frames over a certain period, for example, five seconds, into image frame groups 511 to 514, and performs processing by regarding each image frame group as a target for image analysis).
Regarding claim 12, Tomohiro discloses the group specification apparatus according to claim 1, wherein the at least one processor is further configured to execute the instructions to: in a case where there are a plurality of first group candidates respectively specified as one group, determine whether there are persons common between the plurality of first group candidates respectively specified as one group, and integrate first group candidates determined to have common persons into one group (See Tomohiro, ¶ [0044], The video analysis unit 321 compares the facial feature amounts of the people belonging to these provisional groups, and if the same people are present, merges these provisional groups, and if not, makes them independent groups).
Regarding claim 13, Tomohiro discloses the group specification apparatus according to claim 1, wherein the at least one processor is further configured to execute the instructions to: in a case where the persons constituting the first group candidate are not specified as one group (See Tomohiro, ¶ [0045], At this point, all remaining people in the undetermined provisional group are considered to be single customers),
newly select a person who has not yet been selected, from among the plurality of persons within the first shot image, and newly set the first group candidate, newly set the second group candidate when the first group candidate is newly set (See Tomohiro, ¶ [0045], In the example of FIG. 8, the video analysis section 321 extracts the provisional group with the highest score (provisional group E (70 points)) and the provisional group with the second highest score (provisional group G (65 points)) from the remaining undetermined provisional groups. ¶ [0045], It is determined that the same person (person c1) is present in the extracted tentative group E and tentative group G, and the two tentative groups are integrated. As a result of the integration, a confirmed group Z consisting of persons c1, c2, and c3 is reconstructed. Note: Examiner is interpreting the compared people as the selected person, because a person must be selected in order to be compared),
newly calculate the similarity when the first group candidate and the second group candidate are newly set (See Tomohiro, ¶ [0045], It is determined that the same person (person c1) is present in the extracted tentative group E and tentative group G, and the two tentative groups are integrated. As a result of the integration, a confirmed group Z consisting of persons c1, c2, and c3 is reconstructed), and
specify a group, using the newly calculated similarity (See Tomohiro, ¶ [0045], As a result of removing the persons c1, c2, and c3 belonging to group Z from the undetermined provisional groups, the group accuracy of provisional group B is updated from 30 to 60. The video analysis section 321 extracts the provisional group with the highest score (provisional group B (60 points)) and the provisional group with the second highest score (provisional group D (50 points)) from the remaining undetermined provisional groups. It is determined that the same persons (persons b1 and b2) are present in the extracted provisional group B and provisional group D, and the two provisional groups are integrated. As a result of the integration, a confirmed group Y consisting of persons b1 and b2 is reconstructed).
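The iterative flow described in Tomohiro’s ¶ [0045] — repeatedly extracting the two highest-scoring undetermined provisional groups, confirming a merged group when they share a person, removing the confirmed members, and continuing with the remainder — can be sketched as a greedy loop. The scores and member identifiers below loosely mirror the example in ¶ [0045]; the function name and the handling of non-overlapping groups are assumptions made for illustration.

```python
def confirm_groups(provisional):
    """provisional: list of (score, members) tuples.
    Greedily pair the two highest-scoring undetermined groups and merge
    them into a confirmed group when they share at least one person."""
    confirmed = []
    remaining = sorted(provisional, key=lambda g: g[0], reverse=True)
    while len(remaining) >= 2:
        g1, g2 = remaining[0][1], remaining[1][1]
        if set(g1) & set(g2):
            merged = sorted(set(g1) | set(g2))
            confirmed.append(merged)
            # Remove confirmed members from all undetermined groups.
            remaining = [(s, [p for p in m if p not in merged])
                         for s, m in remaining[2:]]
            remaining = [(s, m) for s, m in remaining if m]
        else:
            # Simplification: drop the top group and continue with the rest.
            remaining = remaining[1:]
    return confirmed

groups = [(70, ["c1", "c2"]), (65, ["c1", "c3"]),
          (60, ["b1", "b2"]), (50, ["b1", "b2"])]
print(confirm_groups(groups))  # → [['c1', 'c2', 'c3'], ['b1', 'b2']]
```

With the example scores above, groups E-like and G-like candidates (70 and 65 points) merge first via shared person c1, after which the remaining pair merges via shared persons b1 and b2, matching the confirmed groups Z and Y described in the reference.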
Regarding claim 14, claim 14 is rejected on the same grounds as claim 1; the arguments presented above for claim 1 are equally applicable to claim 14, and the limitations similar to those of claim 1 are not repeated herein but are incorporated by reference.
Regarding claim 15, claim 15 is rejected on the same grounds as claim 1; the arguments presented above for claim 1 are equally applicable to claim 15, and the limitations similar to those of claim 1 are not repeated herein but are incorporated by reference. Furthermore, Tomohiro teaches a non-transitory computer-readable recording medium that includes a program recorded thereon for specifying a group from a shot image by a computer, the program including instructions that cause the computer to carry out the claimed operations (See Tomohiro, ¶ [0025], The CPU 201 uses a RAM 202 as a work memory, executes programs stored in a ROM 203 or a storage unit 209, and controls the configuration described below via a system bus 208. The storage unit 209 is a hard disk drive (HDD), a solid state drive (SSD), a flash memory, or the like, and stores an OS and a program for implementing image processing, which will be described later).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 4, 7 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Tomohiro (JP 2017/130061 A) in view of Sagawa et al. (US 2011/0267489 A1, hereafter "Sagawa").
Regarding claim 4, Tomohiro teaches the group specification apparatus according to claim 3, [wherein the state condition further includes a size of the other person being within a set range referenced on a size of the selected person].
However, Tomohiro fails to teach wherein the state condition further includes a size of the other person being within a set range referenced on a size of the selected person.
Sagawa, working in the same field of endeavor, teaches: wherein the state condition further includes a size of the other person being within a set range referenced on a size of the selected person (See Sagawa, ¶ [0120], In step S1802, the face tracking processing unit 106 groups the faces at the position thereof in the horizontal direction according to the position coordinates acquired in step S1801. For example, the faces can be grouped in the following manner. More specifically, the face tracking processing unit 106 can acquire a Y-coordinate maximum value f_max and a Y-coordinate minimum value f_min of the face in the previous frame Img_p. Furthermore, the face tracking processing unit 106 can group the faces in the unit of a group satisfying a condition of the face size).
Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Tomohiro such that the state condition further includes a size of the other person being within a set range referenced on a size of the selected person, based on the method of Sagawa. The suggestion/motivation would have been to accurately track the position of faces even during movement (See Sagawa, ¶¶ [0004]–[0017]).
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Sagawa with Tomohiro to obtain the invention as specified in claim 4.
Regarding claim 7, Tomohiro teaches the group specification apparatus according to claim 1, wherein the at least one processor is further configured to execute the instructions to: [in a case of specifying the persons constituting the first group candidate as one group, output at least one of a position, size, orientation, attribute and the first attribute configuration information of each person constituting the first group candidate].
However, Tomohiro fails to teach, in a case of specifying the persons constituting the first group candidate as one group, outputting at least one of a position, size, orientation, attribute and the first attribute configuration information of each person constituting the first group candidate.
Sagawa, working in the same field of endeavor, teaches: in a case of specifying the persons constituting the first group candidate as one group, output at least one of a position, size, orientation, attribute and the first attribute configuration information of each person constituting the first group candidate (See Sagawa, ¶ [0119], Referring to FIG. 19, in step S1801, the face tracking processing unit 106 acquires position coordinates of the face finally associated in step S207 by referring to the correspondence chart illustrated in FIG. 18. ¶ [0120], In step S1802, the face tracking processing unit 106 groups the faces at the position thereof in the horizontal direction according to the position coordinates acquired in step S1801. For example, the faces can be grouped in the following manner. More specifically, the face tracking processing unit 106 can acquire a Y-coordinate maximum value f_max and a Y-coordinate minimum value f_min of the face in the previous frame Img_p. Furthermore, the face tracking processing unit 106 can group the faces in the unit of a group satisfying a condition of the face size).
Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Tomohiro to, in a case of specifying the persons constituting the first group candidate as one group, output at least one of a position, size, orientation, attribute and the first attribute configuration information of each person constituting the first group candidate, based on the method of Sagawa. The suggestion/motivation would have been to accurately track the position of faces even during movement (See Sagawa, ¶¶ [0004]–[0017]).
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Sagawa with Tomohiro to obtain the invention as specified in claim 7.
Regarding claim 10, Tomohiro teaches the group specification apparatus according to claim 1, wherein the at least one processor is further configured to execute the instructions to: [set a partial region of the second shot image as a search range, based on the shooting time of the first shot image, the shooting time of the second shot image, and the position of the person selected from among the plurality of persons within the first shot image, and select a person from the set search range].
However, Tomohiro fails to teach setting a partial region of the second shot image as a search range, based on the shooting time of the first shot image, the shooting time of the second shot image, and the position of the person selected from among the plurality of persons within the first shot image, and selecting a person from the set search range.
Sagawa, working in the same field of endeavor, teaches: set a partial region of the second shot image as a search range, based on the shooting time of the first shot image, the shooting time of the second shot image, and the position of the person selected from among the plurality of persons within the first shot image, and select a person from the set search range (See Sagawa, ¶ [0105], The face tracking processing unit 106 calculates an average value ave_fr of the ratio of variation of the size of the faces, which are associated between the previous frame and the current frames. Furthermore, the face tracking processing unit 106 multiplies the size of the undetected face in the previous frame by the average value ave_fr to calculate the undetected face size s_ff. ¶ [0109], As described above, in the present exemplary embodiment, the face tracking processing unit 106 calculates a motion vector of faces by utilizing the position and the size of the faces detected in both the previous frame Img_p and the current frame Img_in).
Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Tomohiro to set a partial region of the second shot image as a search range, based on the shooting time of the first shot image, the shooting time of the second shot image, and the position of the person selected from among the plurality of persons within the first shot image, and select a person from the set search range, based on the method of Sagawa. The suggestion/motivation would have been to accurately track the position of faces even during movement (See Sagawa, ¶¶ [0004]–[0017]).
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Sagawa with Tomohiro to obtain the invention as specified in claim 10.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Tomohiro (JP 2017/130061 A) in view of Sukegawa et al. (JP 2012003623 A, hereafter "Sukegawa").
Regarding claim 5, Tomohiro teaches the group specification apparatus according to claim 1, wherein the first attribute configuration information is constituted by label data respectively representing the attributes of the persons constituting the first group candidate (See Tomohiro, ¶ [0020], More specifically, the grouping unit 314 refers to the age and gender condition holding unit 325 that holds the age and gender conditions for grouping, and divides the people detected from the entrance image into provisional groups. [FIG. 5], 325, Group Attributes. Note: Examiner is interpreting the group attributes as labels),
the second attribute configuration information is constituted by label data respectively representing the attributes of the persons constituting the second group candidate (See Tomohiro, ¶ [0041], The behavior condition storage unit 326 stores group attributes and behavior criteria to which the group attributes are assigned in association with each other. [FIG. 7], 326, Group Attributes. Note: Examiner is interpreting the group attributes as labels), and
the at least one processor is further configured to execute the instructions to: [calculate, as the similarity, a ratio of persons having same attributes to a number of all persons in the first group candidate and second group candidate combined].
However, Tomohiro fails to teach calculating, as the similarity, a ratio of persons having same attributes to a number of all persons in the first group candidate and second group candidate combined.
Sukegawa, working in the same field of endeavor, teaches: calculate, as the similarity, a ratio of persons having same attributes to a number of all persons in the first group candidate and second group candidate combined (See Sukegawa, ¶ [0059], Next, as a second method, by obtaining a difference in average similarity, a ratio of average similarities, or a degree of separation between two similarity groups of the top N (N is equal to that of the first method) search results and the N + 1 th to Mth (M is equal to or larger than N + 1 and equal to or smaller than the number of items registered in the person information management unit 160) search results, an attribute having the largest index indicates that "there is registration information that is closer than other registration information", and indicates that the attribute is effective as a search result).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tomohiro’s reference to calculate, as the similarity, a ratio of persons having same attributes to a number of all persons in the first group candidate and second group candidate combined, based on the method of Sukegawa’s reference. The suggestion/motivation would have been to accurately match face images (See Sukegawa, ¶ [0002–0005, 0007–0008]).
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Sukegawa with Tomohiro to obtain the invention as specified in claim 5.
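For illustration only, one plausible reading of the similarity recited in claim 5, a ratio of persons sharing an attribute across the two candidates to all persons in the two candidates combined, can be sketched as follows. The function name and attribute labels are hypothetical and are not drawn from the record:

```python
def attribute_similarity(first_group, second_group):
    """Hypothetical sketch of the claimed ratio: persons whose
    attribute appears in both group candidates, divided by the
    total number of persons in the combined candidates."""
    combined = first_group + second_group
    # Attributes that occur in both candidates
    shared = set(first_group) & set(second_group)
    # Count every person (in either candidate) carrying a shared attribute
    matching = sum(1 for attr in combined if attr in shared)
    return matching / len(combined) if combined else 0.0
```

For example, comparing a candidate of {adult male, child} with a candidate of {adult male, adult female} yields two matching persons out of four combined, i.e. a similarity of 0.5 under this reading.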
Claim(s) 11 is rejected under 35 U.S.C. 103 as being unpatentable over Tomohiro et al. (JP 2017/130061 A, hereafter, “Tomohiro”) in view of Nishikawa et al. (US 2019/0019019 A1, hereafter, “Nishikawa”).
Regarding claim 11, Tomohiro teaches the group specification apparatus according to claim 8, wherein, further at least one processor configured to execute the instructions to: [in a case of specifying the persons constituting the first group candidate as one group, specify a group serving as a sample that conforms with the first group candidate, by checking the first attribute configuration information of the first group candidate against a database in which a plurality of groups serving as samples and attribute configuration information thereof are registered in advance].
However, Tomohiro fails to teach: in a case of specifying the persons constituting the first group candidate as one group, specify a group serving as a sample that conforms with the first group candidate, by checking the first attribute configuration information of the first group candidate against a database in which a plurality of groups serving as samples and attribute configuration information thereof are registered in advance.
Nishikawa, working in the same field of endeavor, teaches: in a case of specifying the persons constituting the first group candidate as one group, specify a group serving as a sample that conforms with the first group candidate, by checking the first attribute configuration information of the first group candidate against a database in which a plurality of groups serving as samples and attribute configuration information thereof are registered in advance (See Nishikawa, ¶ [0155], The group determination methods described above are described for exemplary purposes only, and the disclosure is not limited to these methods. For example, in one method, a database is prepared which registers a group ID and person face image IDs of multiple persons in association with each other in advance. If the face images of the persons are detected, the persons are determined to be in the same group).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tomohiro’s reference to, in a case of specifying the persons constituting the first group candidate as one group, specify a group serving as a sample that conforms with the first group candidate, by checking the first attribute configuration information of the first group candidate against a database in which a plurality of groups serving as samples and attribute configuration information thereof are registered in advance, based on the method of Nishikawa’s reference. The suggestion/motivation would have been to provide accurate tracking of the position of the person (See Nishikawa, ¶ [0002–0008]).
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Nishikawa with Tomohiro to obtain the invention as specified in claim 11.
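For illustration only, the database check recited in claim 11, matching a candidate’s attribute configuration against sample groups registered in advance, might be sketched as follows. The function name and the sample data are hypothetical and are not drawn from the record:

```python
def find_sample_group(candidate_attrs, sample_db):
    """Hypothetical sketch: check a candidate's attribute
    configuration against a pre-registered database of sample
    groups; return the ID of the first sample group whose
    attribute configuration matches, else None."""
    target = sorted(candidate_attrs)
    for group_id, attrs in sample_db.items():
        # Order-insensitive comparison of attribute configurations
        if sorted(attrs) == target:
            return group_id
    return None
```

Under this reading, a candidate composed of {adult male, adult female, child} would conform with a pre-registered "family" sample whose attribute configuration lists those same three labels.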
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Hirooka et al. (US 20170366685 A1) teaches an information processing apparatus having an obtainment unit (315) configured to obtain image attribute information and a derivation unit (314) configured to derive image group attribute information. A comparison unit is configured to compare the image group attribute information derived for a first image group with that derived for a second image group, which includes images captured in a second time range different from the first time range. A decision unit is configured to decide the item for recommendation to the user in accordance with a result of the comparison by the comparison unit.
Yamaguchi (US 20120170856 A1) teaches that, in a conventional image classification device that extracts a feature from an image and classifies the image using the extracted feature, two images included in the same image group may be classified into different categories when each has a different feature. To solve this problem, the image classification device of Yamaguchi calculates, for each person appearing in the plurality of images of an image group photographed for one event, a main character degree, which is an index indicating a degree of importance in units of image groups, and classifies the images into one of the classification destination events in units of image groups based on the calculated main character degrees.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DION J SATCHER whose telephone number is (703)756-5849. The examiner can normally be reached Monday - Thursday 5:30 am - 2:30 pm, Friday 5:30 am - 9:30 am PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DION J SATCHER/Patent Examiner, Art Unit 2676
/Henok Shiferaw/Supervisory Patent Examiner, Art Unit 2676