DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that use the word “means” or “step” but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitation(s) is/are: “an obtaining function configured to obtain,” “a reducing function configured to … reduce,” “a detecting function configured to detect,” and “an inserting function configured to insert” in claim 12.
One of ordinary skill in the art would understand that the recited “functions” connote sufficient structure, materials, or acts to perform the claimed functions because they are part of a camera system.
Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof.
If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that performs the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Boice et al. (US 2006/0126738 A1) in view of Piekniewski et al. (US 2016/0086052 A1), hereinafter referred to as Boice and Piekniewski, respectively.
Regarding claim 1, Boice teaches a method for masking a detected object in a video stream captured by a camera arranged in a camera system including the camera and at least one device, wherein a location and field of view is known for the device and a location and a field of view is known for the camera, and wherein the field of view of the device and the field of view of the camera are non-overlapping (Boice Fig. 11: see FOV camera Y and FOV camera X, the two FOVs are non-overlapping; Boice ¶¶0048: “initially describing the camera field of view and the camera field of view progression, as well as defining variables and formulas used in the calculation of the pan, tilt and/or adjustment data”), the method comprising:
obtaining, in the camera, information indicating that an object is approaching the field of view of the camera, wherein the obtained information is determined from information from the device indicating a location and a direction of movement of the object and the known locations and fields of view of the camera and the device (Boice Fig. 11: see 1115’s movement indicated by 1125; Boice ¶¶0111: “An object 1115 is shown on a walkway 1120, such as an airport concourse. Object 1115 is a person walking from right to left (as shown in FIG. 11) on walkway 1120. The motion of object 1115 is denoted by arrow 1125”).
However, Boice does not appear to explicitly teach in response to the obtained information, reducing a threshold for detecting objects that are to be masked in the video stream captured by the camera, wherein the threshold specifies a confidence over which an object is determined to belong to an object class to be masked in the video stream, or wherein the threshold is for what is detected as foreground and what is detected as background and specifies an amount of change in image data in an image frame relative to image data in a preceding image frame over which an object is determined to belong to the foreground; detecting an object that is to be masked in the video stream using the reduced threshold; and inserting masking of the detected object in the video stream.
Pertaining to the same field of endeavor, Piekniewski teaches, in response to the obtained information, reducing a threshold for detecting objects that are to be masked in the video stream captured by the camera, wherein the threshold specifies a confidence over which an object is determined to belong to an object class to be masked in the video stream, or wherein the threshold is for what is detected as foreground and what is detected as background and specifies an amount of change in image data in an image frame relative to image data in a preceding image frame over which an object is determined to belong to the foreground (Note that only one of the alternative limitations is required by the claim language. Piekniewski ¶¶0104: “the threshold level may be determined dynamically based on analysis of one or more preceding images … the threshold may be configured to maximize classification success rate and/or ratio … the threshold may be configured based on an evaluation of pixel distribution (e.g., shape of the histogram)”; Piekniewski ¶¶0105: “In some dynamic saliency threshold implementations, the saliency threshold may be gradually decreased below the peak value, to produce an increasingly large contour around the saliency peak”);
detecting an object that is to be masked in the video stream using the reduced threshold (Piekniewski ¶¶0137: “object tracker of the disclosure may use the saliency mask in order to locate an object of interest. The saliency mask may be used to prime/initialize one or more object trackers”); and
inserting masking of the detected object in the video stream (Piekniewski Fig. 4; Piekniewski ¶¶0068: “The video information may comprise for example multiple streams of frames received from a plurality of cameras disposed separate from one another”).
Boice and Piekniewski are considered to be analogous art because they are directed to image processing for tracking objects. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method, system, and program for cameras to track an object using motion vector data (as taught by Boice) to use a threshold for detecting foreground vs. background (as taught by Piekniewski) because the combination can detect moving objects dynamically (Piekniewski ¶¶0105).
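For orientation only, the combined teaching mapped to claim 1 above (reduce the detection threshold when a non-overlapping device reports an object approaching the camera's field of view, detect with the reduced threshold, and insert masking) can be sketched as follows. This is an illustrative sketch, not part of the record, and all names, threshold values, and signatures are hypothetical:

```python
# Illustrative sketch of the claim 1 method; all names and values are hypothetical.
from dataclasses import dataclass

DEFAULT_THRESHOLD = 0.8   # confidence above which an object is classed for masking
REDUCED_THRESHOLD = 0.5   # lowered when a neighboring device reports an approach


@dataclass
class Detection:
    object_class: str
    confidence: float
    bbox: tuple  # (x, y, w, h)


def mask_approaching_objects(frames, approach_reported, detect, insert_mask):
    """For each frame, apply the reduced threshold while a device with a
    non-overlapping field of view reports an object approaching the camera's
    field of view; mask every detection that clears the active threshold."""
    for frame in frames:
        threshold = REDUCED_THRESHOLD if approach_reported(frame) else DEFAULT_THRESHOLD
        for det in detect(frame):
            if det.confidence >= threshold:
                frame = insert_mask(frame, det.bbox)
        yield frame
```

The sketch only illustrates the claimed control flow; the per-frame detector and masking operation are placeholders for whatever object classifier and overlay mechanism a camera system would supply.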
Regarding claim 2, Boice, in view of Piekniewski, teaches the method according to claim 1, wherein the obtained information further indicates a subarea of the field of view of the camera in which the object is expected to first appear, and wherein, in the act of reducing, the threshold is reduced only in the indicated subarea (Piekniewski ¶¶0104-0105: “In order to determine the mask extent (the region of the visual field) occupied by the most salient color surface, the following algorithm may be used, according to some implementations. Location of the high saliency area in saliency map (e.g., area of low likelihood (e.g., less than 10%) pixels of a given minimum size (e.g., 100 pixels) in FIG. 2A) may be determined … In some dynamic saliency threshold implementations, the saliency threshold may be gradually decreased below the peak value, to produce an increasingly large contour around the saliency peak”).
Regarding claim 3, Boice, in view of Piekniewski, teaches the method according to claim 1, wherein the obtained information further indicates at which point in time the object will first appear in the field of view of the camera, and wherein the threshold is reduced starting from the indicated point in time (Boice Figs. 5 & 9; Boice ¶¶0050, 0058-0060 teach calculating based on the time intervals and past/current/future fields to determine when the object will appear in FOV 2; also see Piekniewski ¶¶0121-0122: “Image 410 may denote motion based saliency map determined from motion analysis (denoted by arrow 406) between two or more successive images (e.g., 400 and an image taken at another instance in time) … Image 410 may denote information obtained using a kinematic tracker process. By way of an illustration, area denoted by arrow 406 may denote a kinematic prior (e.g., corresponding to a location of an object at a prior time)”).
Regarding claim 4, Boice, in view of Piekniewski, teaches the method of claim 1, wherein the obtained information from the device further indicates that the object is masked by the device (Piekniewski Figs. 2, 4).
Regarding claim 5, Boice, in view of Piekniewski, teaches the method according to claim 1, wherein the obtained information from the device further indicates an object class of the object (Boice ¶¶0107: “This tolerance (ΔAR) may be specified in a lookup table based on the type of object being tracked”; Piekniewski ¶¶0162: “machine learning system; e.g., one trained to classify the presence of object of interest”).
Regarding claim 6, Boice, in view of Piekniewski, teaches the method according to claim 5, wherein the obtained information from the device further indicates a confidence in relation to the object class (Boice ¶¶0107 & Piekniewski ¶¶0162 discussed above; Boice ¶¶0143: “tracking accuracy”; Piekniewski ¶¶0121: “Motion-based saliency may be configured based on confidence of motion estimation. Different shades in image 410 may denote motion detection confidence. e.g., white area 414 denoting 100%, black denoting 0%, and grey (e.g., 412) denoting an intermediate value, e.g., 0.67)”).
Regarding claim 7, Boice, in view of Piekniewski, teaches the method according to claim 1, wherein, in the act of reducing, the threshold is reduced in an area along a periphery of the field of view (Piekniewski ¶¶0104 & ¶¶0105 discussed above teach contour-outline threshold adjustment).
Regarding claim 8, Boice, in view of Piekniewski, teaches the method according to claim 1, wherein the device is one of:
a second camera, an IR camera, a thermal camera, a radar, a sonar, and a lidar (Boice Figs. 2 & 11).
Regarding claims 11-12, Boice, in view of Piekniewski, further teaches a camera and a non-transitory computer-readable storage medium (Boice Figs. 1-2). Therefore, claims 11-12 are rejected using the same rationale as applied to claim 1 discussed above.
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Boice et al. (US 2006/0126738 A1) in view of Piekniewski et al. (US 2016/0086052 A1), and further in view of Henson (US 2005/0123172 A1), hereinafter referred to as Boice, Piekniewski, and Henson, respectively.
Regarding claim 9, Boice, in view of Piekniewski, teaches the method according to claim 1, but does not appear to explicitly teach, after detecting the object in the camera field of view, increasing the threshold; and on condition that the object is detected using the increased threshold: maintaining masking of the detected object in the video stream; or on condition that the object is not detected using the increased threshold: discontinuing masking of the detected object in the video stream.
Pertaining to the same field of endeavor, Henson teaches, after detecting the object in the camera field of view, increasing the threshold; and on condition that the object is detected using the increased threshold: maintaining masking of the detected object in the video stream; or on condition that the object is not detected using the increased threshold: discontinuing masking of the detected object in the video stream (Henson ¶¶0054: “analysis of incoming video data”; Henson ¶¶0075: “Image overlays 815 and privacy masks 816 are provided on the processing system 509 in this embodiment to facilitate a useful level of monitoring system operation, without invading privacy or violating privacy laws”; Henson ¶¶0106: “The noise comparison process 1210 compares the proportion of isolated foreground pixels with a target value of around 0.2%. If the proportion of isolated foreground pixels (due to noise) is below this target, the comparison process generates a negative output, thus lowering the threshold supplied to the classification process 1206. This results in a probable increase in the number of isolated foreground pixels when the next image frame is processed. If, alternatively, the proportion of isolated foreground pixels is higher than the configured target (around 0.2%), the threshold is increased, thereby reducing the number of isolated foreground pixels that are found in the next frame”).
Boice, Piekniewski, and Henson are considered to be analogous art because they are directed to image processing for tracking objects. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method, system, and program for cameras to track an object using motion vector data and thresholds (as taught by Boice, in view of Piekniewski) to either increase or decrease the threshold (as taught by Henson) because the combination can detect foreground objects of different sizes (Henson ¶¶0106).
Regarding claim 10, Boice, in view of Piekniewski and Henson, teaches the method according to claim 9, wherein the act of increasing the threshold is performed a predetermined time after detecting the object (Henson ¶¶0146: “When detecting activities of potential interest, a likely scenario is for a person or object to move past a digital monitoring camera such that, at the start of the period of activity only part of the body or object is in view”; Henson ¶¶0149: “first camera has a period of activity 2103 resulting in a snapshot 2104 being recorded. Similarly, a second camera has a period of activity 2105 resulting in a snapshot 2106 being recorded. Finally, a snapshot 2107 is recorded in response to a period of activity identified from a period of activity 2108 in response to signals processed from a third camera”).
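For orientation only, the verification step mapped to claims 9-10 above (after first detecting the object with the reduced threshold, increase the threshold, then maintain masking only if the object is still detected, else discontinue it) can be sketched as follows. This is an illustrative sketch, not part of the record, and all names and values are hypothetical:

```python
# Illustrative sketch of the claims 9-10 verification step;
# all names and threshold values are hypothetical.
def verify_masking(confidence, reduced=0.5, increased=0.8):
    """Re-test a detection, made earlier at the reduced threshold,
    against the increased threshold: keep the mask only if the detection
    still clears the higher bar, otherwise discontinue it."""
    if confidence >= increased:
        return "maintain_mask"      # still detected at the increased threshold
    if confidence >= reduced:
        return "discontinue_mask"   # detected only at the reduced threshold
    return "no_detection"           # never cleared either threshold
```

Per claim 10, this re-test would be invoked a predetermined time after the initial reduced-threshold detection.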
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOO J SHIN whose telephone number is (571)272-9753. The examiner can normally be reached M-F, 10-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella, can be reached at (571)272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Soo Shin/Primary Examiner, Art Unit 2667