Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s response to the last Office action, filed August 5, 2025, has been entered and made of record. Claims 6 and 9 have been amended. Claims 1-10 are currently pending in this application.
Response to Arguments
Applicant's arguments filed August 5, 2025 have been fully considered but they are not persuasive.
Applicant asserted (page 11, second paragraph) that the bounding boxes obtained from the object detection model are not a fixed area, and that the bounding boxes follow objects detected by the object detection model for monitoring and tracking those objects.
The Examiner respectfully disagrees. Shin et al. clearly discloses the estimated target location, which represents the effective detection area: the data association module 214 can compare a location of a respective bounding box region to a location of an estimated target location and determine, based on the comparison, whether the bounding box region is located within a threshold proximity of the estimated target location, (see at least: col. 11, line 48, through col. 12, line 47). Further, in response to Applicant’s argument that the bounding box region is not fixed, it is noted that the claim language does not define or specify that the effective detection area is a fixed region. Applicant also asserted on page 9 that the white rectangular frames displayed in the thermal image frames shown in Figs. 5A to 5D may be the area of the bed, (specification, Par. 0050). However, it is noted that the feature upon which Applicant relies (i.e., the effective detection area being an area of the bed) is not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
For the reasons stated above, the rejection of claims 1 and 6, and of their dependent claims, was proper and is maintained.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 and 4-8 are rejected under 35 U.S.C. 103 as being unpatentable over Watanabe et al. (US-PGPUB 20190147292).
Regarding claim 1, Watanabe discloses an image selection apparatus, comprising:
at least one memory configured to store instructions; and at least one processor configured to execute the instructions to perform operations comprising:
acquiring query information indicating a pose of a person, (see at least: Par. 0039-0043, the pose estimating unit 106 recognizes the pose information included in the input image, … and the system executes pose recognizing processing for each detected region for at least one person included in the image; and from Par. 0047, the query generating unit 110 converts the pose information obtained by the pose inputting unit 109 into a retrieval query, generating a multi-query based on the plurality of pose features and image features, and metadata such as attributes, times, and places can be added to the retrieval condition, [i.e., implicitly acquiring query information, “a multi-query based on the plurality of pose features and image features”, indicating a pose of a person, “implicitly indicating the person’s pose included in the image”]); and
selecting at least one target image from a plurality of images subject to selection by using the query information, (see at least: Par. 0032, 0044, the image database 108 holds the image information and the personal information obtained by the registering processing; and from Par. 0048, the image retrieving unit 111 obtains the corresponding registered data from the image database 108 by using a query vector obtained by the query generating unit 110, … as the square Euclidean distance is shorter, the image is closer to the registered data which matches the retrieval condition, [i.e., selecting at least one target image from a plurality of images subject to selection, “selecting the closest image to the registered data which matches the retrieval condition”, by using the query information, “using a query vector obtained by the query generating unit 110”]. See also, Fig. 7, and Par. 0073),
wherein the query information includes relative positions of a plurality of keypoints indicating different portions of a human body from each other, (see at least: Par. 0030, feature points, (head, neck, right shoulder, right elbow, … left ankle), are detected by image recognizing processing and have information on coordinates and reliability in an image; and from Par. 0046-0047, the pose information includes a set of a plurality of feature points, and the feature point has the coordinates and the reliability, where the query information is based on the pose information, [i.e., wherein the query information includes relative positions of a plurality of keypoints, “implicit by the pose information including a set of a plurality of feature points”, indicating different portions of a human body from each other, “the feature point has coordinates implicitly relative to the human body”]);
selecting at least one target image comprises determining weighting for at least one of the keypoints, (see at least: Par. 0072-0073, for calculation of the degree of similarity, for example, image features of a person image may be used, or pose features calculated from feature points of parts other than defect parts may be used. Further, the features extracting unit 107 estimates the coordinates of the lacking feature point from the set of coordinates obtained in step S703 (S704), based on weighting, “determining weighting for at least one of the keypoints”, [i.e., selecting at least one target image, “at least one similar image”, comprises determining weighting for at least one of the keypoints, “coordinates of the lacking feature point from the set of coordinates obtained based on weighting”]); and
selecting the at least one target image by using relative positions of the plurality of keypoints of a person included in the image subject to selection, the query information, and the weighting, (see at least: Fig. 9, and Par. 0080-0083, the pose inputting unit 109 receives the pose information input by the user (S901), where the pose information is a set of feature points, and each feature point is indicated by coordinate values; and the query generating unit 110 converts the pose information input in step S901 into the pose features (S902), where the image retrieving unit 111 retrieves similar images from the image database 108 according to the pose features obtained in step S902 and the retrieval condition obtained in step S903 (S904), and … when the image features are given, the distance of the image features and the distance of the pose features are integrated, rearranged, and output. As an integration method of the distances, two distances may be simply added, and a distance may be normalized or weighted, [i.e., selecting the at least one target image, “implicit by retrieving similar images from the image database 108”, by using relative positions of the plurality of keypoints of a person included in the image subject to selection, “implicit by feature points indicated by coordinate values”, the query information, “implicit by pose features”, and the weighting, “implicit by weighting distances”]).
Watanabe does not expressly disclose determining weighting for at least one of the keypoints by using a difference between reference pose information including reference relative positions of the plurality of keypoints and the query information.
However, Watanabe discloses that the image retrieving apparatus 104 obtains similar images (603, 604, and 605) from the image database 108 and complements position information on lacking feature points from the pose information of the similar images (pose information 606), where for calculation of the degree of similarity, for example, image features of a person image may be used, or pose features, “reference pose information”, calculated from feature points of parts other than defect parts may be used; and the features extracting unit 107 estimates the coordinates of the lacking feature point from the set of coordinates obtained in step S703 (S704), based on weighting, according to the degree of similarity, (i.e., score), (see at least: Par. 0072-0073). Watanabe further discloses that the square Euclidean distance is used as an index of the degree of similarity between the images, “the square Euclidean distance index of the degree of similarity implicitly being the difference between images relative to the pose features”, (see at least: Par. 0048). This is an indication that the degree of similarity is technically determined based on the difference between reference pose information, including reference relative positions of the plurality of keypoints, and the query information; and that the weighting for at least one of the keypoints is determined by using a difference between reference pose information including reference relative positions of the plurality of keypoints and the query information.
Regarding claim 2, Watanabe renders obvious all the limitations of claim 1, as discussed above.
Watanabe further discloses wherein the operations comprise, when a difference, in each of the plurality of keypoints, between the reference relative position of the keypoint in the reference pose information and a relative position of the keypoint in the query information increases, increasing the weighting for the keypoint, (see at least: Par. 0072-0073, where the coordinates of the lacking feature point are estimated from the set of coordinates obtained in step S703 (S704), based on weighting, according to the degree of similarity, (i.e., score), [accordingly, if the degree of similarity, (i.e., score), increases, the weighting for the keypoint technically increases]).
Regarding claim 4, Watanabe renders obvious all the limitations of claim 1, as discussed above.
Watanabe further discloses wherein the operations comprise generating the reference pose information by processing the plurality of images subject to selection, (see at least: Par. 0036, a recognition target region is extracted from the still image data or the moving image data accumulated in the image storing apparatus 101 as necessary, and the pose information is obtained from the extracted region by the image recognizing processing and is registered in the image database 108, “generating the reference pose information by processing the plurality of images”; and from Par. 0080-0083, the plurality of images are implicitly subject to selection, “see the rejection of claim 1 for more details”).
Regarding claim 5, Watanabe renders obvious all the limitations of claim 1, as discussed above.
Watanabe further discloses wherein the operations comprise determining the reference relative position by performing statistical processing on relative positions of the plurality of keypoints in each of at least two of the images subject to selection, (see at least: Par. 0030, feature points are detected, … “reliability” here is a value indicating a probability that the feature point exists at the detected coordinates, and is calculated based on statistical information, “performing statistical processing on relative positions of the plurality of keypoints in each of at least two of the images subject to selection”. See also, Par. 0040, 0065, the pose estimating processing is executed by using the regression model, “implicitly based on statistical processing”, which outputs coordinate values of the feature points from the input image, [i.e., determining the reference relative position, “outputs coordinate values of the feature points from the input image”, by performing statistical processing on relative positions of the plurality of keypoints in each of at least two of the images subject to selection, “implicitly performing statistical processing by the regression model on the coordinate values of the feature points from the input image”]).
Regarding claim 6, Watanabe renders obvious all the limitations of claim 1, as discussed above.
Watanabe further discloses wherein the operations comprise acquiring the reference pose information according to an input from a user, (see at least: Par. 0042-0043, acquiring pose features, “reference pose information”, and makes a pose identifier learn the collected features by machine learning, and identifying the pose, by using the pose identifier and the appearance of the image; and from Par. 0046, the pose inputting unit 109 receives the pose information which is input by a user via the input apparatus 102, [i.e., acquiring the reference pose information, “acquiring pose features”, according to an input from a user, “the pose information is input by a user”]).
Regarding claim 7, claim 7 recites substantially similar limitations as those set forth in claim 1. As such, claim 7 is rejected under at least a similar rationale.
The Examiner further acknowledges the following additional limitation: “an image selection method”. However, Watanabe discloses an “image selection method”, (see at least: Par. 0007, “an image retrieving method”).
Regarding claim 8, claim 8 recites substantially similar limitations as those set forth in claim 1. As such, claim 8 is rejected under at least a similar rationale.
The Examiner further acknowledges the following additional limitation: “non-transitory computer-readable medium storing a program for causing a computer to perform operations”. However, Watanabe discloses a “non-transitory computer-readable medium storing a program for causing a computer to perform operations”, (see at least: Par. 0052, processing program 203 stored in the storage apparatus 202, “a non-transitory computer-readable medium storing a program”).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Watanabe et al. (US-PGPUB 20190147292) in view of Yerushalmy et al. (US Patent 11,861,944).
Watanabe renders obvious all the limitations of claim 1, as discussed above.
Watanabe does not expressly disclose wherein the reference pose information indicates relative positions of the plurality of keypoints when a person has an upright pose and both hands are lowered along a body.
However, Yerushalmy et al. discloses wherein the reference pose information indicates relative positions of the plurality of keypoints when a person has an upright pose and both hands are lowered along a body, (see at least: col. 13, lines 9-24, the point analysis module 308 may determine correspondence between the locations of the points determined from the video data 104(1) and pose data 310, where the pose data 310 may associate point locations 312 for sets of points with corresponding pose identifiers 314 indicative of particular poses 202, “the reference pose information indicates relative positions of the plurality of keypoints”. For example, a first set of point locations 312(1) may be associated with a pose identifier 314(1) that corresponds to an upright pose 202, “see Fig. 2, pose 201(1), which represents when a person has an upright pose and implicitly both hands are lowered along a body”, while a second set of point locations 312(2) may be associated with a pose identifier 314(2) corresponding to a prone pose 202, [i.e., wherein the reference pose information indicates relative positions of the plurality of keypoints, “the pose data 310 may associate point locations 312 for sets of points”, when a person has an upright pose and both hands are lowered along a body, “pose 201(1) in Fig. 2”]).
Watanabe and Yerushalmy are combinable because they are both concerned with analyzing human poses. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Watanabe to use the point analysis module 308, as taught by Yerushalmy, in order to associate a set of point locations with a pose identifier that corresponds to an upright pose, (Yerushalmy, col. 13, lines 9-24).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMARA ABDI, whose telephone number is (571) 272-0273. The examiner can normally be reached 9:00 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMARA ABDI/Primary Examiner, Art Unit 2668 10/16/2025