DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of the claim of priority to foreign application EP 23172013.7, filed 8 May 2023. Copies of the certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 U.S.C. 119(a)-(d) and 37 CFR 1.55.
Information Disclosure Statement
The IDS dated 2 May 2024 has been considered and placed in the application file.
Specification - Abstract
Applicant is reminded of the proper content of an abstract of the disclosure.
A patent abstract is a concise statement of the technical disclosure of the patent and should include that which is new in the art to which the invention pertains. The abstract should not refer to purported merits or speculative applications of the invention and should not compare the invention with the prior art.
If the patent is of a basic nature, the entire technical disclosure may be new in the art, and the abstract should be directed to the entire disclosure. If the patent is in the nature of an improvement in an old apparatus, process, product, or composition, the abstract should include the technical disclosure of the improvement. The abstract should also mention by way of example any preferred modifications or alternatives.
Where applicable, the abstract should include the following: (1) if a machine or apparatus, its organization and operation; (2) if an article, its method of making; (3) if a chemical compound, its identity and use; (4) if a mixture, its ingredients; (5) if a process, the steps.
Extensive mechanical and design details of an apparatus should not be included in the abstract. The abstract should not contain legal language such as comprising. The abstract should be in narrative form and generally limited to a single paragraph within the range of 50 to 150 words in length. The sheet or sheets presenting the abstract may not include other parts of the application or other material.
See MPEP § 608.01(b) for guidelines for the preparation of patent abstracts.
Specification
The disclosure is objected to because it contains an embedded hyperlink and/or other form of browser-executable code (see, e.g., paragraph [0004]). Applicant is required to delete the embedded hyperlink and/or other form of browser-executable code; references to websites should be limited to the top-level domain name without any prefix such as http:// or other browser-executable code. See MPEP § 608.01. Appropriate correction is required.
Claim Interpretation
Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and the ordinary meaning, as understood by one having ordinary skill in the art, of the terms used in a claim will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional but does not require that feature or step does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int’l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).
Claim 5 recites “one or more of.” Since “one or more of” is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. Claims 4, 6, 8 and 9 recite “or.” Since “or” is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. On balance, the disjunctive interpretation appears to enjoy the most specification support, and for that reason the disjunctive interpretation (one of A, B, or C) is adopted for the purposes of this Office Action. Applicant’s comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim(s) 4 and 9 are rejected under 35 U.S.C. 112(b), as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
According to MPEP 2143.03 (I), “If a claim is subject to more than one interpretation, at least one of which would render the claim unpatentable over the prior art, the examiner should reject the claim as indefinite under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph (see MPEP § 2175) and should reject the claim over the prior art based on the interpretation of the claim that renders the prior art applicable. (Ex parte Ionescu, 222 USPQ 537 (Bd. Pat. App. & Inter. 1984)).”
Claim(s) 4 and 9 recite “and/or.” It is unclear whether the claim elements are intended to be conjunctive or disjunctive. However, for purposes of searching the limitations, the disjunctive interpretation has been applied.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claim 7, reciting “a noise level …is disregarded,” is rejected under 35 U.S.C. 112(d) as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Applicant may cancel the claim, amend the claim to place it in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the statutory requirements.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover a mental process using images/drawings (a concept performed in the human mind, including observation, evaluation, judgment, opinion, and prediction) and mathematical calculations for likelihood/probability (e.g., P(A) = f / N, where P(A) is the probability of an event (event A) occurring, f is the number of ways the event can occur (frequency), and N is the total number of possible outcomes).
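For clarity of the record, the frequency-based probability calculation referenced above can be reproduced in a few lines of code. This is illustrative only; the function name and example values are the examiner's illustrations and are not taken from the claims or the specification.

```python
def probability(frequency: int, total_outcomes: int) -> float:
    """Return P(A) = f / N for an event that can occur f ways out of N possible outcomes."""
    if total_outcomes <= 0:
        raise ValueError("total number of outcomes must be positive")
    return frequency / total_outcomes

# Example: an event that can occur 3 ways out of 12 equally likely outcomes.
p = probability(3, 12)  # 0.25
```

Such a calculation can plainly be performed mentally or with pen and paper, which is the point of the rejection above.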
This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations to be considered specifically applied to a particular technological problem to be solved. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be done mentally and no additional features in the claims would preclude them from being performed as such.
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that claims 1 and 13 are directed to an abstract idea as shown below:
STEP 1: Do the claims fall within one of the statutory categories?
YES. Claims 1-12 are directed to a method, i.e., a process, and claim 13 is directed to a computer-readable medium, i.e., a manufacture.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?
YES, the claims are directed toward a mental process (i.e., abstract idea).
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;
Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).
The method in claim 1, for example, comprises a mental process that can be practicably performed in the human mind and is therefore an abstract idea.
Claim 1 recites:
receiving a video stream…
detecting an object in the image frames and generating a bounding box…
measuring a noise level…
temporally filtering the bounding box over a plurality of image frames…
These limitations, as drafted, under their broadest reasonable interpretation, cover performance of the limitations in the mind or by a human. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that “can be performed in the human mind, or by a human using a pen and paper” to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, “methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas: the ‘basic tools of scientific and technological work’ that are open to all.” 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) (“‘[M]ental processes and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’” (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
As such, a person could view the images/drawings and overlay bounding boxes on the corresponding outlines, with some degree of error or lack thereof, either mentally or using pen and paper. The mere nominal recitation that the various steps are executed by a processor (e.g., a processing unit) does not take the limitations out of the mental process grouping. Thus, the claims recite a mental process.
If a claim limitation, under its broadest reasonable interpretation, covers performance of a mental step which could be performed with a simple tool such as a pen and paper, then it falls within the “mental steps” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
NO, the claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to affect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Thus, claims 1-13 do not recite any of the exemplary considerations that are indicative of the judicial exception having been integrated into a practical application.
Thus, because claims 1-13 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, claims 1 and 13 are not eligible subject matter under 35 U.S.C. 101. A similar analysis applies to dependent claims 2-12, which are likewise identified as being directed toward an abstract idea, not reciting additional elements that integrate the judicial exception into a practical application, and not reciting additional elements that amount to significantly more than the judicial exception.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-14 are rejected under 35 U.S.C. 103 as being obvious over US Patent Publication No. 2019/0057588 A1 (Savvides et al.) in view of US Patent Publication No. 2005/0135698 A1 (Yatsenko et al.).
[Savvides et al. Fig. 3, showing noise encapsulated by bounding boxes.]
Claim 1
Regarding Claim 1, Savvides et al. teach a method of stabilizing bounding boxes for objects in a video stream ("The bounding boxes can be spatially and temporally filtered to eliminate potential objects," paragraph [0004]), the method comprising:
receiving a video stream comprising a sequence of image frames ("a video monitoring method or system includes modules capable of determining motion changes in a set of video frames to find potential objects," paragraph [0004]);
detecting an object in the image frames and generating a bounding box surrounding the object ("a video monitoring method or system includes modules capable of determining motion changes in a set of video frames to find potential objects," paragraph [0004]);
measuring a noise level for the video stream ("Too small of bounding boxes are likely to correspond to individual pixels or tiny blobs, rather than objects of interests. Such bounding boxes may result from noise, for example, or from small changes in the lighting conditions, rather than an actual object moving across the scene, and may be safely removed," paragraph [0034] where too small teaches a noise level); and
temporally filtering the bounding box over a plurality of image frames based on the measured noise level, thereby stabilizing the bounding box in the video stream ("Temporally filtering the bounding boxes can include object motion analysis and object tracking," paragraph [0008]),
wherein the bounding box is temporally filtered over a number of preceding image frames ("In addition, an object tracker module 422 can be used to eliminate bounding boxes that are not tracked in multiple frames," paragraph [0036] where not tracked in multiple frames teaches filtering over preceding frames).
Savvides et al. do not explicitly teach that a higher noise level implies temporal filtering over a greater number of preceding image frames and that a lower noise level implies temporal filtering over a smaller number of preceding image frames.
[Yatsenko et al. Fig. 1, showing the use of temporal filters.]
However, Yatsenko et al. teach the number of preceding image frames is adapted such that a higher noise level implies a temporal filtering over a greater number of preceding image frames and a lower noise level implies a temporal filtering over a smaller number of preceding image frames ("Temporal filtration compares a current value of a target pixel with previous values of the same target pixel. A temporal filter may recalculate a noisy pixel by comparing the current value of the target pixel with previous values of the target pixel. That is, a temporal filter may replace a noisy pixel value with an average value of that pixel from several previous frames," paragraph [0005]).
Therefore, taking the teachings of Savvides et al. and Yatsenko et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify “Robust Motion Filtering for Real-Time Video Surveillance,” as taught by Savvides et al., with “Image Noise Reduction Using a Spatiotemporal Recursive Filter,” as taught by Yatsenko et al. The suggestion/motivation for doing so would have been that, “if an object is moving, a pixel's value may widely vary over successive images. Hence, successive images may contain dissimilar information. Averaging of dissimilar information may produce a value that may not resemble the true value of the current pixel. Therefore, temporal filtration is an unsatisfactory method of enhancing a moving image because the averaging of frames may produce unwanted motion blur or motion lag,” as noted by Yatsenko et al. in paragraph [0006]. The combination would predictably react better to moving objects in a moving image, as there is a reasonable expectation that temporal filtering of moving objects is difficult; and/or the combination merely combines prior art elements according to known methods to yield predictable results.
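For clarity of the record, the following is a minimal sketch of the combined teaching as mapped above: bounding-box coordinates are averaged over a number of preceding frames that is adapted to the measured noise level, with a higher noise level implying a longer averaging window. The (x, y, w, h) box representation, the [0, 1] noise scale, and all function and class names are the examiner's illustrative assumptions and are not taken from either cited reference.

```python
from collections import deque

def window_length(noise_level: float, min_frames: int = 2, max_frames: int = 30) -> int:
    """Map a measured noise level in [0, 1] to a number of preceding frames.

    A higher noise level implies filtering over a greater number of
    preceding image frames; a lower noise level implies a smaller number.
    """
    n = min_frames + round(noise_level * (max_frames - min_frames))
    return max(min_frames, min(max_frames, n))

class BoxStabilizer:
    """Temporally filter an (x, y, w, h) bounding box over preceding frames."""

    def __init__(self, max_frames: int = 30):
        self.history = deque(maxlen=max_frames)

    def update(self, box, noise_level):
        """Record the detected box and return its temporally filtered position."""
        self.history.append(box)
        n = window_length(noise_level, max_frames=self.history.maxlen)
        recent = list(self.history)[-n:]
        # Average each coordinate over the noise-adapted window of frames.
        return tuple(sum(coord) / len(recent) for coord in zip(*recent))
```

Under this sketch, a noisy stream averages each box coordinate over up to 30 preceding detections, while a clean stream averages over as few as 2, tracking the claimed adaptation of the window to the noise level.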
The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of computer readable medium claim 13 and system claim 14 while noting that the rejection above cites to both device and method disclosures. Claims 13 and 14 are mapped below for clarity of the record and to specify any new limitations not included in claim 1.
Claim 2
Regarding claim 2, Savvides et al. teach the method of stabilizing bounding boxes for objects in a video stream according to claim 1, as noted above.
Savvides et al. do not explicitly teach that a position of the temporally filtered bounding box in a given image frame is a combination, such as an average, of positions of the bounding box for a number of preceding image frames.
However, Yatsenko et al. teach wherein a position of the temporally filtered bounding box in a given image frame is a combination, such as an average, of positions of the bounding box for a number of preceding image frames ("Temporal filtration compares a current value of a target pixel with previous values of the same target pixel. A temporal filter may recalculate a noisy pixel by comparing the current value of the target pixel with previous values of the target pixel. That is, a temporal filter may replace a noisy pixel value with an average value of that pixel from several previous frames," paragraph [0005]).
Savvides et al. and Yatsenko et al. are combined as per claim 1.
Claim 3
Regarding claim 3, Savvides et al. teach the method of stabilizing bounding boxes for objects in a video stream according to claim 2, as noted above.
Savvides et al. are not relied upon to teach adapting the number of preceding image frames to the measured noise level.
However, Yatsenko et al. teach comprising the step of adapting the number of preceding image frames to the measured noise level ("Temporal filtration compares a current value of a target pixel with previous values of the same target pixel. A temporal filter may recalculate a noisy pixel by comparing the current value of the target pixel with previous values of the target pixel. That is, a temporal filter may replace a noisy pixel value with an average value of that pixel from several previous frames," paragraph [0005]).
Savvides et al. and Yatsenko et al. are combined as per claim 1.
Claim 4
Regarding claim 4, Savvides et al. teach the method of stabilizing bounding boxes for objects in a video stream according to claim 1, wherein the noise is visual noise in the video stream, the noise is a dot pattern, such as a pixel pattern, varying, such as randomly varying, between the image frames, superimposed on the image frames, and/or wherein the noise level is a magnitude of variation of a dot pattern between the image frames ("These changes correspond to both valid moving objects and false detections or noise. In one embodiment, an object of interest segmentation algorithm can use a background differentiation approach in order to estimate new objects that have entered the scene. Such an algorithm utilizes the difference between consecutive frames to identify moving objects in the scene. This difference image is then thresholded to determine bounding boxes for potential objects," paragraph [0024] where background differentiation teaches a magnitude of variation of a dot pattern).
Claim 5
Regarding claim 5, Savvides et al. teach the method of stabilizing bounding boxes for objects in a video stream according to claim 1, wherein the noise comprises one or more of fluctuations of color, luminance, or contrast, internal noise, such as noise caused by electricity, heat or illumination levels, compression artifacts, or interference noise, such as Gaussian noise, fixed-pattern noise, salt and pepper noise, shot noise, quantization noise or anisotropic noise ("valid moving objects can be detected, identified, and tracked against a variety of background by first filtering out nearly all invalid detections such as plant motions, environmental noise, and sudden lighting changes" paragraph [0026] where sudden lighting changes are illumination levels).
Claim 6
Regarding claim 6, Savvides et al. teach the method of stabilizing bounding boxes for objects in a video stream according to claim 1, wherein the noise level is measured for an entire region of the image frames, a region covering the bounding box, or a region covering the bounding box and an additional region surrounding the bounding box ("Bounding box classification is necessary in order to determine whether the detected region corresponds to a valid detected object or to irrelevant changes not caused by moving objects (e.g. lighting changes)," paragraph [0025] where lighting changes are noise).
Claim 7
Regarding claim 7, Savvides et al. teach the method of stabilizing bounding boxes for objects in a video stream according to claim 1, wherein a noise level for sub-regions comprising moving objects is disregarded in the step of measuring the noise level ("Bounding box classification is necessary in order to determine whether the detected region corresponds to a valid detected object or to irrelevant changes not caused by moving objects (e.g. lighting changes)," paragraph [0025] where lighting changes are noise).
Claim 8
Regarding claim 8, Savvides et al. in view of Yatsenko et al. teach the method of stabilizing bounding boxes for objects in a video stream according to claim 1, wherein the step of temporally filtering the bounding box over a plurality of image frames based on the measured noise level is performed for every N:th image frame, where N is an integer greater than 1, or wherein the step of temporally filtering the bounding box over a plurality of image frames based on the measured noise level is performed for every image frame ("The recursive temporal filter 110 produces a weighted average of a previous frame Yprnv and an input signal x," Yatsenko et al., paragraph [0026], where producing a weighted average of a previous frame is filtering for every image frame).
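The recursive temporal filtering quoted above, a weighted average of a previous filtered frame and the current input signal, can be sketched per element as follows. This is illustrative only; the alpha weight and the treatment of each frame as a single scalar value are the examiner's simplifying assumptions, not taken from the cited disclosure.

```python
def recursive_temporal_filter(frames, alpha=0.5):
    """Yield y_t = alpha * x_t + (1 - alpha) * y_(t-1).

    Each output is a weighted average of the current input and the
    previous filtered output, so filtering occurs for every frame.
    """
    y_prev = None
    for x in frames:
        y_prev = x if y_prev is None else alpha * x + (1 - alpha) * y_prev
        yield y_prev
```

Because each output depends on the previous filtered output, every frame after the first incorporates information from all preceding frames with exponentially decaying weight.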
Claim 9
Regarding claim 9, Savvides et al. in view of Yatsenko et al. teach the method of stabilizing bounding boxes for objects in a video stream according to claim 1, wherein the noise level for any given point in time is measured backwards in time for a number of image frames and/or for a time window ("The recursive temporal filter 110 produces a weighted average of a previous frame Yprnv and an input signal x," Yatsenko et al., paragraph [0026], where producing a weighted average of a previous frame is filtering for a number of image frames, for example, one).
Claim 10
Regarding claim 10, Savvides et al. teach the method of stabilizing bounding boxes for objects in a video stream according to claim 1, as noted above.
Savvides et al. do not explicitly teach temporally smoothing the bounding box.
However, Yatsenko et al. teach wherein the step of temporally filtering the bounding box over a plurality of image frames comprises temporally smoothing the bounding box ("Temporal filtration compares a current value of a target pixel with previous values of the same target pixel. A temporal filter may recalculate a noisy pixel by comparing the current value of the target pixel with previous values of the target pixel. That is, a temporal filter may replace a noisy pixel value with an average value of that pixel from several previous frames," paragraph [0005] where averaging is smoothing).
Savvides et al. and Yatsenko et al. are combined as per claim 1.
Claim 11
Regarding claim 11, Savvides et al. teach the method of stabilizing bounding boxes for objects in a video stream according to claim 1, wherein the step of detecting an object in the image frames comprises applying a machine learning model, such as a neural network, trained to recognize the object ("Conventional machine learning systems can be used for classification and identification, including support vector machines, neural networks, convolutional neural networks, and recurrent neural networks," paragraph [0038]).
Claim 12
Regarding claim 12, Savvides et al. teach the method of stabilizing bounding boxes for objects in a video stream according to claim 1, wherein the step of detecting an object in the image frames comprises processing the image frames to identify predefined features, such as shapes, that are characteristic for the object ("identification processing modules object can be used for fitting objects into predetermined categories, such as "human", "car", "package", "pedestrian", "pet", "others", or the special "none" category," paragraph [0038]).
Claim 13
Regarding claim 13, Savvides et al. teach a non-transitory computer readable recording medium comprising a computer program having instructions which, when executed by a computing device or computing system, cause the computing device or computing system to carry out a method of stabilizing bounding boxes for objects in a video stream ("The bounding boxes can be spatially and temporally filtered to eliminate potential objects," paragraph [0004]), the method comprising:
receiving a video stream comprising a sequence of image frames ("a video monitoring method or system includes modules capable of determining motion changes in a set of video frames to find potential objects," paragraph [0004]);
detecting an object in the image frames and generating a bounding box surrounding the object ("a video monitoring method or system includes modules capable of determining motion changes in a set of video frames to find potential objects," paragraph [0004]);
measuring a noise level for the video stream ("Too small of bounding boxes are likely to correspond to individual pixels or tiny blobs, rather than objects of interests. Such bounding boxes may result from noise, for example, or from small changes in the lighting conditions, rather than an actual object moving across the scene, and may be safely removed," paragraph [0034] where too small teaches a noise level); and
temporally filtering the bounding box over a plurality of image frames based on the measured noise level, thereby stabilizing the bounding box in the video stream ("Temporally filtering the bounding boxes can include object motion analysis and object tracking," paragraph [0008]),
wherein the bounding box is temporally filtered over a number of preceding image frames ("In addition, an object tracker module 422 can be used to eliminate bounding boxes that are not tracked in multiple frames," paragraph [0036], where "not tracked in multiple frames" teaches filtering over preceding frames).
Savvides et al. are not relied upon to teach adapting the number of preceding image frames to the measured noise level.
However, Yatsenko et al. teach that the number of preceding image frames is adapted such that a higher noise level implies a temporal filtering over a greater number of preceding image frames and a lower noise level implies a temporal filtering over a smaller number of preceding image frames ("Temporal filtration compares a current value of a target pixel with previous values of the same target pixel. A temporal filter may recalculate a noisy pixel by comparing the current value of the target pixel with previous values of the target pixel. That is, a temporal filter may replace a noisy pixel value with an average value of that pixel from several previous frames," paragraph [0005]).
Savvides et al. and Yatsenko et al. are combined for the same reasons set forth in the rejection of claim 1 above.
Claim 14
Regarding claim 14, Savvides et al. teach an image processing system ("The bounding boxes can be spatially and temporally filtered to eliminate potential objects," paragraph [0004]) comprising:
at least one camera ("FIG. 4 illustrates an embodiment of method 400 suitable for operation on the camera system 100 of FIG. 1," paragraph [0028]); and
a processing unit ("Memory 102 provides run-time memory support for processor 101, such as frame buffers for image processing operations," paragraph [0021]) configured to:
receive a video stream comprising a sequence of image frames from the camera ("a video monitoring method or system includes modules capable of determining motion changes in a set of video frames to find potential objects," paragraph [0004]);
detect an object in the image frames and generate a bounding box surrounding the object ("a video monitoring method or system includes modules capable of determining motion changes in a set of video frames to find potential objects," paragraph [0004]); and
measure a noise level for the video stream ("Too small of bounding boxes are likely to correspond to individual pixels or tiny blobs, rather than objects of interests. Such bounding boxes may result from noise, for example, or from small changes in the lighting conditions, rather than an actual object moving across the scene, and may be safely removed," paragraph [0034], where "too small" teaches a noise level);
temporally filter the bounding box over a plurality of image frames based on the measured noise level to stabilize the bounding box in the video stream ("Temporally filtering the bounding boxes can include object motion analysis and object tracking," paragraph [0008]),
wherein the processing unit is configured to temporally filter the bounding box over a number of preceding image frames ("In addition, an object tracker module 422 can be used to eliminate bounding boxes that are not tracked in multiple frames," paragraph [0036], where "not tracked in multiple frames" teaches filtering over preceding frames).
Savvides et al. do not explicitly teach all of adapting the number of preceding image frames to the measured noise level.
However, Yatsenko et al. teach that the number of preceding image frames is adapted such that a higher noise level implies a temporal filtering over a greater number of preceding image frames and a lower noise level implies a temporal filtering over a smaller number of preceding image frames ("Temporal filtration compares a current value of a target pixel with previous values of the same target pixel. A temporal filter may recalculate a noisy pixel by comparing the current value of the target pixel with previous values of the target pixel. That is, a temporal filter may replace a noisy pixel value with an average value of that pixel from several previous frames," paragraph [0005]).
Savvides et al. and Yatsenko et al. are combined for the same reasons set forth in the rejection of claim 1 above.
References Cited
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
US Patent 10,511,846 B1 to Chen et al. discloses that video frames received from a video capture device are divided into a plurality of 16x16 blocks. For each source block, a moving object detection process and a noise estimation process are performed. Then temporal denoising is adaptively applied to the blocks of the source frame based on the noise estimation and moving object detection. The adaptively filtered blocks are provided to an output frame and forwarded to a coding module for encoding.
US Patent Publication 2024/0193789 A1 to Bandwar et al. discloses receiving a first image frame and a second image frame and determining a motion map indicating motion of objects within the first and second image frames. Additionally, motion hotspots may be identified within the second image frame based on the motion map. A temporal filtering process may be applied to portions of the second image frame located within motion hotspots to generate a corrected image frame.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS whose telephone number is (703)756-4696. The examiner can normally be reached Monday-Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Jennifer Mehmood can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Heath E. Wells/Examiner, Art Unit 2664
Date: 17 February 2026