Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed January 5, 2026 have been fully considered but they are not persuasive.
Regarding claim 1,
(1) Applicant states that “The cited references do not disclose, teach or suggest at least "wherein the detection data on the location and appearance of the object contained in the captured video is generated on the basis of a feature map," (emphasis added), as recited in amended independent claim 1.” and “The Office alleges that the heat map referred to in Kim is the equivalent of the feature map in the present disclosure. The recitation from Kim above indicates that the heat map generates data of an object's location; however, the recitation does not indicate that the heat map of Kim generates data of an object's appearance, as is cited in amended claim 1.” Examiner respectfully disagrees. Kim teaches the detection data on the location (The motion object extraction unit 243 processes the difference image to extract the foreground object and obtain the coordinate value. Once the coordinates of the target object are obtained, feature values of the object area can be extracted and tracked continuously in subsequent frames. The feature value of the object can use a color histogram, edge information, the size of the object, and the like. Page 9 3rd paragraph) and appearance of the object (In addition, the result detected by the frontal view detection unit 423 (or the rearal view detection unit 424) may include a face area, and information on the face area may be used in the face detection unit 440. Page 6 2nd paragraph. 2 or 4, the customer attribute recognition unit 230 may extract the customer attribute information from the object information input from the object detection unit 210. Customer attribute means a characteristic that can be classified by type such as sex, history, facial expression, and the like. Page 7 6th paragraph) contained in the captured video (METHOD FOR CUSTOMER ANALYSIS BASED ON TWO-DIMENSION VIDEO AND APPARATUS FOR THE SAME. Title) is generated on the basis of a feature map (The feature extraction unit 422 may divide each of the pyramid images into small blocks and extract a local feature for each block. The feature extraction unit 422 extracts features such as a MCT (Modified Census Transform), a color value, a Histogram of Oriented Gradients (HOG), and the like, and combines one or more of these features with the inputted pyramid image. The extracted features may be input to the front and back detection unit 423 and the backward detection unit 424, respectively. Page 5 6th paragraph. The frontal view detection unit 423 and the rearal view detection unit 424 can determine whether a frontal view of a person exists or a backward view of a person exists in the extracted image block. Page 5 7th paragraph).
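As an illustrative aside only, forming no part of the grounds of rejection: the block-wise local feature extraction that Kim describes at page 5, paragraphs 6-7, can be sketched in a few lines of Python/NumPy. The sketch shows how a per-block feature map yields both location data (blocks whose response exceeds a threshold) and a per-block descriptor from which appearance attributes could be classified; the gradient-magnitude feature, the function names, and the threshold are hypothetical simplifications, not drawn from Kim.

```python
# Illustrative sketch only; hypothetical names, not Kim's implementation.
import numpy as np

def block_feature_map(frame: np.ndarray, block: int = 16) -> np.ndarray:
    """Divide a grayscale frame into small blocks and extract one local
    feature per block (here, mean gradient magnitude), analogous to the
    block-wise MCT/color/HOG features described in Kim."""
    gy, gx = np.gradient(frame.astype(np.float32))
    mag = np.hypot(gx, gy)
    rows, cols = frame.shape[0] // block, frame.shape[1] // block
    cropped = mag[: rows * block, : cols * block]
    # Average the gradient magnitude within each block -> (rows, cols) map.
    return cropped.reshape(rows, block, cols, block).mean(axis=(1, 3))

def locations_from_feature_map(fmap: np.ndarray, thresh: float):
    """Location data: block coordinates whose feature response exceeds a
    threshold. In a fuller system, the same per-block descriptors would
    also feed a classifier producing appearance data (e.g., attributes)."""
    ys, xs = np.nonzero(fmap > thresh)
    return list(zip(ys.tolist(), xs.tolist()))

frame = np.random.randint(0, 256, (480, 640)).astype(np.uint8)  # stand-in frame
fmap = block_feature_map(frame)
print(locations_from_feature_map(fmap, thresh=fmap.mean() + 2 * fmap.std()))
```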
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-7, and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (Korean Patent Publication No.: KR 20170006356 A), hereinafter Kim, in view of Ko (Korean Patent Publication No.: KR 101448392 B1), hereinafter Ko.
Regarding claim 1, Kim teaches a method for analyzing a visitor on the basis of a video (METHOD FOR CUSTOMER ANALYSIS BASED ON TWO-DIMENSION VIDEO AND APPARATUS FOR THE SAME. Title; see also Abstract) in an edge computing environment (The image analysis apparatus 120 may be installed in a store (i.e., in an edge computing environment rather than a server) or installed in a separate PC or server at a separate place. Page 3 5th paragraph), the method comprising the steps of: extracting feature data (The feature extraction unit 422 may divide each of the pyramid images into small blocks and extract a local feature for each block. Page 5 6th paragraph) from a captured video (The video information from each camera may be provided to the video stream composition and transmission unit 330 through various interfaces. Page 4 4th paragraph) of an offline space (In the following description, the customer information acquisition technology and the customer information analysis technology according to the present invention are applied to the off-line store, but the scope of the present invention is not limited thereto. Page 3 2nd paragraph); generating detection data on a location (The heat map can display the distribution of a plurality of objects in the camera view and the frequency of staying at a specific location. Page 9 8th paragraph) and an appearance of an object contained in the captured video from the feature data (2 or 4, the customer attribute recognition unit 230 may extract the customer attribute information from the object information input from the object detection unit 210. Customer attribute means a characteristic that can be classified by type such as sex, history, facial expression, and the like. Page 7 6th paragraph) using an artificial neural network-based detection model (The map learning method is a type of machine learning for deriving a function from training data. The training data typically includes the attributes for the input object in vector form, indicating what the desired result is for each vector. Page 5 last paragraph); [[and]] integrating detection data on a location and an appearance of a target object (As described above, the present invention can provide at least one of outbound counting, customer attribute recognition, face recognition (i.e., customer identification), and moving line analysis of a customer visiting a specific place using a two-dimensional image obtained by a camera. Page 9 9th paragraph. In addition, in the present invention, instead of using a separate face detection unit for recognizing the customer attribute, the face detection unit 440 of the object detection unit 210 described with reference to FIG. By reusing or sharing detection results, the entire process can be speeded up. Page 7 11th paragraph); tracking the target object in the captured video with reference to the integratively generated detection data (The object detecting unit 210 may include at least one of a motion detecting unit, a head-and-shoulder (HS) detecting unit, a tracking unit, and a face detecting unit, as described later. Page 3 8th paragraph); wherein the detection data on the location (The motion object extraction unit 243 processes the difference image to extract the foreground object and obtain the coordinate value. Once the coordinates of the target object are obtained, feature values of the object area can be extracted and tracked continuously in subsequent frames. The feature value of the object can use a color histogram, edge information, the size of the object, and the like. Page 9 3rd paragraph) and appearance of the object (In addition, the result detected by the frontal view detection unit 423 (or the rearal view detection unit 424) may include a face area, and information on the face area may be used in the face detection unit 440. Page 6 2nd paragraph. 2 or 4, the customer attribute recognition unit 230 may extract the customer attribute information from the object information input from the object detection unit 210. Customer attribute means a characteristic that can be classified by type such as sex, history, facial expression, and the like. Page 7 6th paragraph) contained in the captured video (METHOD FOR CUSTOMER ANALYSIS BASED ON TWO-DIMENSION VIDEO AND APPARATUS FOR THE SAME. Title) is generated on the basis of a feature map (The feature extraction unit 422 may divide each of the pyramid images into small blocks and extract a local feature for each block. The feature extraction unit 422 extracts features such as a MCT (Modified Census Transform), a color value, a Histogram of Oriented Gradients (HOG), and the like, and combines one or more of these features with the inputted pyramid image. The extracted features may be input to the front and back detection unit 423 and the backward detection unit 424, respectively. Page 5 6th paragraph. The frontal view detection unit 423 and the rearal view detection unit 424 can determine whether a frontal view of a person exists or a backward view of a person exists in the extracted image block. Page 5 7th paragraph).
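As a further illustrative aside (again, no part of the rejection): Kim at page 9, paragraph 3 tracks the extracted object through subsequent frames using feature values such as a color histogram. A minimal sketch of that idea follows, assuming RGB patches as NumPy arrays; the histogram-intersection matching rule and all names are hypothetical choices for illustration, not Kim's.

```python
# Illustrative sketch only; hypothetical names, not Kim's implementation.
import numpy as np

def color_histogram(patch: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalized per-channel color histogram of an H x W x 3 patch,
    one of the feature values Kim lists for tracking."""
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(np.float32)
    return h / (h.sum() + 1e-9)

def match_object(ref_hist: np.ndarray, candidates: list) -> int:
    """Track by picking the candidate patch in the next frame whose
    histogram best matches the reference (histogram intersection)."""
    scores = [np.minimum(ref_hist, color_histogram(p)).sum() for p in candidates]
    return int(np.argmax(scores))

ref = color_histogram(np.random.randint(0, 256, (64, 32, 3)))
cands = [np.random.randint(0, 256, (64, 32, 3)) for _ in range(3)]
print(match_object(ref, cands))  # index of best-matching candidate patch
```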
Kim does not teach the following limitations as further recited; however, Ko teaches determining entry or exit of the target object (The number of visitors and the number of passengers are counted using the Line Drawing method for the tracked objects. That is, with the resolution of the upper camera image being 640 x 480 pixels, two virtual reference lines A (inside) and B (outside) are generated at a position of 30 pixels from the upper and lower sides of the image. The number of entries (IN), where the object passes first through B and then through A, and the number of exits (OUT), where the object passes first through A and then through B, are respectively counted by the counting unit 540 shown in FIG. 8. Page 6 9th paragraph) by determining whether the target object passes a predetermined detection line with reference to information on the tracking of the target object (More specifically, when an object of a human head is traced and the tracked object passes the virtual reference line shown in FIG. 20, it is determined whether or not to count the access. Page 10 9th paragraph).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kim to incorporate the teachings of Ko to determine entry or exit of the target object by determining whether the target object passes a predetermined detection line with reference to information on the tracking of the target object, in order to accurately count the number of visitors entering and exiting.
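As a final illustrative aside (no part of the rejection): Ko's Line Drawing counting can be sketched directly from the passage quoted above, assuming a per-frame sequence of vertical (y) coordinates for one tracked object in a 640 x 480 view, with line A 30 pixels from the top and line B 30 pixels from the bottom. The crossing test and names are hypothetical.

```python
# Illustrative sketch only; hypothetical names, not Ko's implementation.
def count_entries_exits(track_ys, line_a=30, line_b=450):
    """Line Drawing method per Ko: crossing B (outside) first and then
    A (inside) counts as an entry (IN); crossing A first and then B
    counts as an exit (OUT). Assumes the track never lands exactly on
    a line between frames."""
    ins = outs = 0
    first = None  # which line this track crossed first
    for prev, cur in zip(track_ys, track_ys[1:]):
        crossed_a = (prev - line_a) * (cur - line_a) < 0
        crossed_b = (prev - line_b) * (cur - line_b) < 0
        if crossed_b and first is None:
            first = "B"
        elif crossed_a and first is None:
            first = "A"
        elif crossed_a and first == "B":
            ins, first = ins + 1, None    # entered: B then A
        elif crossed_b and first == "A":
            outs, first = outs + 1, None  # exited: A then B
    return ins, outs

# A head tracked from the bottom edge to the top edge: one entry, no exits.
print(count_entries_exits([470, 455, 300, 100, 35, 20]))  # -> (1, 0)
```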
Claims 4-6 are unamended and are rejected based on the revised combination of Kim in view of Ko as applied to claim 1 above. The grounds of rejection established in the last Office Action are fully incorporated herein.
Claim 10 is drawn to a non-transitory computer-readable storage medium having stored thereon executable instructions for carrying out the method of claim 1. Therefore, claim 10 corresponds to method claim 1 and is rejected for the same reasons of obviousness as set forth above.
Apparatus claim 11 is drawn to the apparatus corresponding to the method claimed in claim 1. Therefore, apparatus claim 11 corresponds to method claim 1 and is rejected for the same reasons of obviousness as set forth above.
Claims 3 and 7 are unamended and are rejected based on the combination of Kim in view of Ko. The grounds of rejection established in the last Office Action are fully incorporated herein.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEI ZHAO whose telephone number is (703)756-1922. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VU LE can be reached at (571)272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LEI ZHAO/Examiner, Art Unit 2668
/VU LE/Supervisory Patent Examiner, Art Unit 2668