DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 7-9, and 11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Toshiki et al. (JP 2019-101664 A). Toshiki discloses an estimation system comprising:
a first estimator configured to estimate, based on a heat distribution detected by an infrared sensor (“infrared camera”)1, three-dimensional positions of heads of a plurality of people in a target space2; and
a second estimator configured to estimate, based on data on the three-dimensional positions estimated by the first estimator, whether or not the plurality of people are in close contact with each other3.
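For illustration only, the second estimator's function amounts to a pairwise distance check over the estimated three-dimensional head positions. The following minimal sketch assumes a simple Euclidean-distance criterion; the function name, the 1.0 m threshold, and the data layout are illustrative assumptions and are not taken from the Toshiki reference.

    import itertools
    import math

    def find_close_contacts(head_positions, threshold_m=1.0):
        """Return index pairs of people whose estimated 3-D head positions
        lie within threshold_m meters of each other (illustrative only)."""
        pairs = []
        for (i, p), (j, q) in itertools.combinations(enumerate(head_positions), 2):
            if math.dist(p, q) < threshold_m:  # Euclidean distance in 3-D
                pairs.append((i, j))
        return pairs

    # Example: three people; only the first two are within 1.0 m.
    heads = [(0.0, 0.0, 1.6), (0.5, 0.0, 1.6), (5.0, 5.0, 1.7)]
    print(find_close_contacts(heads))  # -> [(0, 1)]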
With respect to claim 2, the system of Toshiki has an output (display)4.
With respect to claim 3, the estimator is configured to determine congestion based on the duration of time that people are detected5.
With respect to claims 7-9, the sensor is disclosed as an infrared camera.
With respect to claim 11, the software runs on a server.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Toshiki et al. (JP 2019-101664 A) in view of Taguchi (WO 2010-098024 A1). Toshiki does not mention calculating the trajectories of the imaged people's heads in order to estimate congestion. Taguchi, however, teaches that in a very crowded situation, calculating each individual's trajectory, and not just the current position, can help eliminate miscounts (see the "Background Art" section) and gives a better estimate of present and future congestion. It therefore would have been obvious to the ordinarily skilled practitioner to modify the software of Toshiki in the same way to improve its congestion detection algorithm.
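For illustration only, trajectory calculation of the kind Taguchi teaches can be sketched as frame-to-frame association of head detections. The greedy nearest-neighbor linker below is an illustrative assumption, not the method of either reference; the function name and the 0.5 m gating distance are likewise assumed.

    import math

    def link_trajectories(frames, max_step=0.5):
        """Greedily link per-frame head positions into trajectories.
        frames: list of frames, each a list of (x, y, z) detections."""
        tracks = [[p] for p in frames[0]]  # one track per first-frame detection
        for detections in frames[1:]:
            unused = list(detections)
            for track in tracks:
                if not unused:
                    break
                last = track[-1]
                nearest = min(unused, key=lambda p: math.dist(last, p))
                if math.dist(last, nearest) <= max_step:  # plausible step size
                    track.append(nearest)
                    unused.remove(nearest)
            tracks.extend([p] for p in unused)  # new people entering the scene
        return tracks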
Conclusion
Claims 5, 6, and 10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RANDY W GIBSON whose telephone number is (571) 272-2103. The examiner can normally be reached Tuesday-Friday, 10 AM-6 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Macchiarolo, can be reached at 571-272-2375. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RANDY W GIBSON/Primary Examiner, Art Unit 2855
1 “…Examples of the surveillance camera 2 include a box type camera, a dome camera, a network camera, and the like. The network camera may, for example, be an IP (Internet Protocol) camera. In addition, in a shop 3 in which the illumination inside or outside the shop is dark, a night vision camera such as an infrared camera may be used as the monitoring camera 2. For example, cameras for various uses such as a security camera, a surveillance camera, a street camera, and a fixed point camera may be used as the surveillance camera 2….”
2 “…The position of the head may be identified, for example, as a rectangular area surrounding the head. Hereinafter, the rectangle surrounding the head may be referred to as a "box". The shape of the box is not limited to a rectangle and may be circular, elliptical, polygonal, or the like. The score is an example of information indicating the certainty (likelihood) that the detected area is the head of a human body, the maximum value of which is "1.000". The example of FIG. 3 is a frame photographed at 12 o'clock ("12:00"). By setting the object to be detected as the head of a person (human body), it is possible to target a part of the human body that is easily captured by the monitoring camera 2 and to improve detection accuracy. Further, by detecting the object with deep learning, it is possible to detect the head of a person in postures or states other than a frontal view. For example, the NN 13 can accurately detect the head even in various postures such as from the back (the back of the head; see A in FIG. 3), from above (the top of the head; see B), and from the side (see C). In addition, the NN 13 can accurately detect the head even when it is partially hidden by another object, for example, blocked by a chair (see code D), covered by a cap (see code E), or occluded by other people or obstacles in a crowded place. The NN 13 may store the detected position and score of the head in the memory unit 11 as the detection information 112. The detection information 112 may be generated for each frame, or identification information of a frame may be associated with an object in the detection information 112. The detection information 112 may be a file of various formats, for example, CSV (Comma-Separated Values). FIG. 4 is an example of the detection information 112. As shown in FIG. 4, the detection information 112 may illustratively include "No.", "x1", "y1", "x2", "y2", and "score" fields. The example of FIG. 4 shows the objects detected in one frame (image)…”
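For orientation, the detection information 112 quoted above amounts to one CSV row per detected head, giving the box corners (x1, y1), (x2, y2) and a confidence score. A minimal sketch of consuming such a file follows; the file path, the 0.8 score cut-off, and the assumption of a header row with those field names are illustrative.

    import csv

    def load_head_centers(path, min_score=0.8):
        """Read rows of (No., x1, y1, x2, y2, score) and return the center
        of each head box whose score meets the cut-off (illustrative only)."""
        centers = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if float(row["score"]) >= min_score:
                    cx = (float(row["x1"]) + float(row["x2"])) / 2
                    cy = (float(row["y1"]) + float(row["y2"])) / 2
                    centers.append((cx, cy))
        return centers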
3 “…The congestion degree calculation unit 124 may calculate the congestion rate based on the detection target frame, for example, as "the number of seats with people / the total number of seats". In addition, the congestion degree calculation unit 124 may convert the calculated congestion rate into a degree of congestion (a discrete value) representing the congestion status, such as "[Light]" / "Normal" / "Heavy", by applying thresholds such as "0.3" and "0.6" to the calculated congestion rate. In the above example, the congestion level is estimated at three levels, but the number of levels may be increased or decreased by increasing or decreasing the number of threshold values….”
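The computation quoted above reduces to a ratio followed by thresholding. A minimal sketch follows, using the "0.3" and "0.6" thresholds from the quotation; the label strings and the function name are assumptions for illustration.

    def congestion_level(occupied_seats, total_seats,
                         thresholds=(0.3, 0.6),
                         labels=("Light", "Normal", "Heavy")):
        """Congestion rate = occupied / total, discretized by thresholds;
        adding or removing thresholds changes the number of levels."""
        rate = occupied_seats / total_seats
        for threshold, label in zip(thresholds, labels):
            if rate < threshold:
                return rate, label
        return rate, labels[len(thresholds)]

    print(congestion_level(7, 10))  # rate 0.7 -> (0.7, 'Heavy')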
4 “…The terminal device 6 is a computer that receives information on the degree of congestion of the store 3 estimated by the server 4. For example, the terminal device 6 may be a computer possessed by a prospective customer of the store 3 who is interested in the seat congestion status of the store 3. Examples of the terminal device 6 include various information processing devices such as a PC (Personal Computer) such as a desktop, laptop, or mobile PC, a tablet, a smartphone, and a mobile phone…”
5 “…Although it depends on the service content and the hours offered by the store 3, the period in which a customer uses a seat in the store 3 is considered to be ten minutes to several tens of minutes (or more than one hour). The surveillance camera 2 can capture image data at a rate of at least one frame per second (1 FPS; Frame Per Second), but analyzing the data of all frames to detect changes in seat occupancy, such as a customer sitting down or leaving, is inefficient. For this reason, in machine learning, image data from only some of the captured frames may be used. For example, in one embodiment, frames acquired at a predetermined sampling interval, such as five minutes apart, may be input to the neural network. Also, in one embodiment, an estimation period of, for example, several hours to several days may be provided to estimate the seat positions. In the following description, it is assumed that an estimation period of 3 days (predetermined period) is provided as an example…”
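For illustration, the sampling scheme in the quoted passage (capture at 1 FPS, sample frames five minutes apart, over a 3-day estimation period) reduces to simple index arithmetic; the function name below is an assumption.

    def sampled_frame_indices(fps=1, interval_s=5 * 60, period_s=3 * 24 * 3600):
        """Indices of the frames to feed the estimator: one frame every
        interval_s seconds, out of fps frames/second over period_s seconds."""
        step = fps * interval_s   # frames skipped between samples
        total = fps * period_s    # total frames captured in the period
        return range(0, total, step)

    print(len(sampled_frame_indices()))  # 864 samples over 3 days at 5-minute spacing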